Automated Market Maker

Guide to Automated Market Making


This article is a compilation of information sourced from publicly available resources online. It is intended to be used for educational purposes, and we have referenced the source of the information as much as possible and where appropriate. If you believe that there is a mistake in the information or original source, please get in touch with us so we can rectify it.

What is a Market Maker?

A market maker is a concept native to finance, specifically trading. It is an entity that engages in market making (no surprise there). The question, then, is: what is market making? Market making is the practice of actively quoting and trading both sides of a market, e.g. the buy and sell sides of the USD/JPY forex market. A market maker will put up bids and asks at certain levels in a market, the difference between these levels being the spread. This spread underpins a market maker’s business model: profit is made on the difference between the best bid and ask prices. Typically, market makers trade high volumes in a market. They are very important because they provide liquidity and depth for the markets they operate in, which would otherwise be very illiquid. This type of market activity has evolved over the years, and works well in markets traded using an order book.


In the context of traditional finance, these market maker participants work well to provide liquidity and ensure markets are healthy and liquid. However, in a decentralized context where trading is not done on any order book, typical market making is not possible because there are no bids, asks or spread. Therefore, to enable decentralized trading of assets, a new method of market making is required.

What Are Automated Market Makers?



Automated Market Makers (AMMs) are the answer decentralized exchanges have adopted to enable trading on their platforms and ensure healthy, liquid markets. They are an integral part of the DeFi ecosystem. Like traditional market makers, AMMs require access to liquidity on both sides of a market. However, AMMs are not an entity, nor do they own any liquidity like a traditional market maker does. Instead, they are self-executing systems, typically implemented as smart contracts, that have custody over liquidity provided by a collection of users - a liquidity pool. AMMs keep the DeFi ecosystem liquid 24/7 via these liquidity pools. These users are incentivized to provide liquidity to an AMM by the promise of a portion of the fees collected on trades the AMM facilitates. The use of a self-executing, deterministic system on a blockchain, and the outsourcing of liquidity, means that users can trade digital assets in a permissionless and automatic way without relying on any other individual party. AMMs and their liquidity pools can be optimized for different purposes, and are proving to be an important mechanism in the DeFi ecosystem.

On a traditional exchange platform, buyers and sellers offer up different prices for an asset. When other users find a listed price to be acceptable, they execute a trade and that price becomes the asset’s market price. Stocks, gold, real estate, and most other assets rely on this traditional market structure for trading. However, AMMs have a different approach to trading assets.

AMMs are a financial tool unique to decentralized finance (DeFi). This relatively new concept is decentralized, always available for trading, and does not rely on the traditional interaction between buyers and sellers. Because of this, AMMs embody the ideals of Ethereum, crypto, and blockchain technology in general: no one entity controls the system, and anyone can build new solutions and participate.

Liquidity Pools and Liquidity Providers


Liquidity refers to how easily one asset can be converted into another asset, often a fiat currency, without affecting its market price. Before AMMs came into play, liquidity was a challenge for decentralized exchanges (DEXs) on Ethereum. Because DEXs were a new technology with a complicated interface, the number of buyers and sellers was small, which meant it was difficult to find enough people willing to trade on a regular basis. AMMs fix this problem of limited liquidity by creating liquidity pools and offering liquidity providers an incentive to supply these pools with assets. Despite being self-executing code, AMMs have exclusive custody over the assets in the liquidity pool. The more assets in a pool and the more liquidity the pool has, the easier trading becomes on decentralized exchanges.


Instead of trading between buyers and sellers, users trade against the liquidity pool. At its core, a liquidity pool is a shared pot of tokens. Users supply liquidity pools with tokens and the price of the tokens in the pool is determined by a mathematical formula. By tweaking the formula, liquidity pools can be optimized for different purposes.


Anyone with an internet connection and in possession of any type of ERC-20 tokens can become a liquidity provider by supplying tokens to an AMM’s liquidity pool. Liquidity providers typically earn a fee for providing tokens to the pool. This fee is paid by traders who interact with the liquidity pool. Recently, liquidity providers have also been able to earn yield in the form of project tokens through what is known as “yield farming.”

AMM Algorithms and the Constant Product Formula

AMMs have become a primary way to trade assets in the DeFi ecosystem, and it all began with a blog post about “on-chain market makers” by Ethereum founder Vitalik Buterin. Rather than quoting either side of an order book like traditional market makers, AMMs use an algorithm to provide traders with a single quote price. The exact algorithm varies between implementations, but these algorithms are the secret ingredient of AMMs. The most common one - the constant product formula - was proposed by Vitalik as:
tokenA_balance(p) * tokenB_balance(q) = k
and popularized by Uniswap as:
x * y = k
The constant, represented by “k”, means there is a constant balance of assets that determines the price of tokens in a liquidity pool. For example, if an AMM holds ether (ETH) and bitcoin (BTC), two volatile assets, every time ETH is bought the price of ETH goes up, as there is less ETH in the pool than before the purchase. Conversely, the price of BTC goes down, as there is more BTC in the pool. The pool stays in constant balance, where the total value of ETH in the pool will always equal the total value of BTC in the pool. Only when new liquidity providers join will the pool expand in size. Visually, the prices of tokens in an AMM pool follow a curve determined by the constant product formula.
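The mechanics above can be sketched in a few lines of code. This is a minimal, illustrative model of an x * y = k pool (the class and method names are our own, and fees are ignored):

```python
# Minimal sketch of a constant product (x * y = k) pool; names are illustrative.
class ConstantProductPool:
    def __init__(self, reserve_x, reserve_y):
        self.x = reserve_x
        self.y = reserve_y
        self.k = reserve_x * reserve_y  # the invariant "k"

    def spot_price(self):
        """Marginal price of X quoted in Y."""
        return self.y / self.x

    def swap_x_for_y(self, dx):
        """Sell dx of X into the pool; return the amount of Y received (no fee)."""
        new_x = self.x + dx
        new_y = self.k / new_x           # keep x * y = k
        dy = self.y - new_y
        self.x, self.y = new_x, new_y
        return dy

pool = ConstantProductPool(100.0, 100.0)   # k = 10,000
out = pool.swap_x_for_y(10.0)              # buy Y with 10 X
# The pool moved along the curve: the price of X fell and the price of Y rose.
```

Note that the swap output (about 9.09 Y for 10 X here) is less than the pre-trade spot price would suggest; that gap is the slippage the curve imposes.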


In this constant state of balance, buying one ETH brings the price of ETH up slightly along the curve, and selling one ETH brings the price of ETH down slightly along the curve. The opposite happens to the price of BTC in an ETH-BTC pool. It doesn’t matter how volatile the price gets, there will eventually be a return to a state of balance that reflects a relatively accurate market price. If the AMM price ventures too far from market prices on other exchanges, the model incentivizes traders to take advantage of the price differences between the AMM and outside crypto exchanges until it is balanced once again.

Because the price of an asset in a liquidity pool depends entirely on the quantity of the other asset(s) in the pool, an AMM is always reactive to market movements. Risk, volatility, and other factors are not considered at all. Whereas traditional market makers may take into account hundreds, or thousands, of factors when pricing each side of an order book, AMMs are confined to the parameters of the formula they implement.

Automated Market Maker Variations

In Vitalik Buterin’s original post calling for automated, on-chain market makers, he emphasized that AMMs should not be the only available option for decentralized trading. Instead, there needed to be many ways to trade tokens, since non-AMM exchanges were vital to keeping AMM prices accurate. What he didn’t foresee, however, was the development of various approaches to AMMs.


The DeFi ecosystem evolves quickly, but three dominant AMM models have emerged: Uniswap, Curve, and Balancer.


  • Uniswap’s pioneering technology allows users to create a liquidity pool with any pair of ERC-20 tokens with a 50/50 ratio (in value, not in quantity necessarily), and has become the most enduring AMM model on Ethereum. 

  • Curve specializes in creating liquidity pools of similar assets such as stablecoins, and as a result, offers some of the lowest rates and most efficient trades in the industry while solving the problem of limited liquidity. 

  • Balancer stretches the limits of Uniswap by allowing users to create dynamic liquidity pools of up to eight different assets in any ratio, thus expanding AMMs’ flexibility.

Although Automated Market Makers harness a new technology, iterations of them have already proven to be an essential financial instrument in the fast-evolving DeFi ecosystem and a sign of a maturing industry. There now exist many variations of the first AMM, each attempting to solve a different problem or offer a unique value proposition. Their key difference lies in the AMM algorithm they implement. These algorithms alter the curvature distribution of the AMM - the function determining its quote price.

Impermanent Loss

Impermanent loss (“IL”, a.k.a. divergence loss) occurs when the value of a liquidity provider’s stake in a pool goes down as the proportion of assets changes. This can occur despite the income of fees from pool transactions. It is calculated as the difference in value between the liquidity position and an equivalent HODL position. The idea underpinning IL is simple: a constant product AMM is constantly purchasing the underperforming asset and selling the outperforming asset. The loss is referred to as impermanent because it is not realized until the liquidity provider’s position is exited; the loss can be reversed if the relevant token returns to its original value. While it is often claimed that trading fees compensate for IL, recent studies suggest this is not always the case.
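For a 50/50 constant product pool, IL has a well-known closed form: if the price ratio moves by a factor k from the deposit price, the LP position is worth 2√k / (1 + k) of the HODL value. A small sketch, ignoring fees:

```python
import math

# Sketch: divergence ("impermanent") loss for a 50/50 constant product pool,
# ignoring fees. k is the ratio of the new price to the price at deposit.
def impermanent_loss(k):
    """Value of the LP position relative to HODLing, minus one (so <= 0)."""
    return 2 * math.sqrt(k) / (1 + k) - 1

# Price doubles (k = 2): the LP is ~5.7% worse off than simply holding.
# Price unchanged (k = 1): the loss is zero -- hence "impermanent".
for k in (0.5, 1.0, 2.0, 4.0):
    print(f"k = {k}: IL = {impermanent_loss(k):.4%}")
```

Note the symmetry: k = 0.5 and k = 2 give the same loss, which matches the intuition that the AMM sells the outperformer and buys the underperformer in either direction.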


Concentrated liquidity, discussed later, does not avoid IL. In fact, similar to leveraged positions, it increases the exposure to this risk. However, in the same vein as above, concentrated liquidity positions also accrue more fees, which are meant to compensate for the higher IL.

Popular AMMs

This section covers the most popular AMMs currently active. From a user’s perspective, their functionality is similar - these are venues for swapping tokens. However, under the hood they operate very differently, and they boast different value propositions. To help explain some of their details, we rely on an article written by Leo Lau: A Mathematical View of Automated Market Maker (AMM) Algorithms and Its Future. This article does a fantastic job of covering the mathematical principles that underpin each of these AMM models. For some readers this coverage may get a bit technical, but do not worry - we try to summarize these concepts at the end of each section.


Bancor

One of Bancor’s key characteristics, and strongest value proposition, is its single-token exposure for liquidity providers. In the Bancor protocol, a single-token LP position is not converted to the correct ratio in the backend, or some other band-aid solution. This is made possible by the integration of the BNT token. Every Bancor liquidity pool is a token pair with the BNT token. Provided the single-token deposit is within the permitted amount, Bancor will mint BNT to match that deposit, and the liquidity provider will earn fees for the deposit-token/BNT pool. Beyond its single-token exposure, Bancor also offers impermanent loss protection, which will be discussed below.

Bancor implements a bonding curve to calculate price. A bonding curve is the relationship between the price of a token and its total supply. With Bancor’s network token (BNT), every token is connected to all other tokens via the BNT as an intermediary. Every connection has a different weight, corresponding to different price-determining bonding curves. Below is an extract from Leo Lau that encapsulates exactly how that relationship translates into a pricing mechanism in Bancor.


The invariant chosen by Bancor is F, called connector weight, which is the ratio between R (reserve token number in the liquidity pool) and the product of S (BNT total supply outside the liquidity pool) and P (the relative price between BNT and reserve token). We could substitute the equation of P and integrate both sides and get the relation between P and S. It is an exponential expression where the exponent α is related to the connector weight F (F between 0 and 1). The smaller F is, the bigger α will be, which means the price will change more rapidly with respect to BNT’s total supply.

Using this expression and simple integrations, we can derive the relation between T (BNT token bought) and E (reserve token paid), where R0, S0 are the current values of R and S.
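The purchase relation from the Bancor whitepaper can be sketched directly. This is an illustrative transcription of the formula T = S0 · ((1 + E / R0)^F − 1), with the variable names taken from the extract above (the function name is our own):

```python
# Sketch of Bancor's bonding-curve purchase formula (from the Bancor whitepaper):
#   T = S0 * ((1 + E / R0)**F - 1)
# E  = reserve tokens paid in
# R0 = current reserve token balance in the pool
# S0 = current BNT supply outside the pool
# F  = connector weight (between 0 and 1)
# T  = BNT tokens bought
def bancor_purchase_return(S0, R0, F, E):
    return S0 * ((1 + E / R0) ** F - 1)

# With F = 1 the price is constant; the smaller F is, the faster the price
# moves as supply changes -- matching the description of the exponent above.
bnt_out = bancor_purchase_return(S0=1_000_000, R0=250_000, F=0.5, E=10_000)
```

A quick sanity check: with F = 1, paying in 10% of the reserve mints exactly 10% of the supply, i.e. the price does not move.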

If we want to exchange token A for token B, we first need to buy BNT from pool A using token A (if we do not have any BNT), and then buy token B from pool B using BNT. Below are the exact formulas needed to calculate how many tokens we will receive. The relative price between token A and token B can be expressed in terms of the relative prices between the BNT token and tokens A and B.


This type of automated market maker enables what Bancor calls single-sided liquidity deposition. The amount of liquidity that can be deposited on a single side is limited; however, Bancor will supplement an equal value of BNT tokens when these deposits are made, increasing the available liquidity. The Bancor protocol also implements a form of impermanent loss compensation by minting BNT and giving it to liquidity providers in situations where impermanent loss affects their position. This mechanism helps attract liquidity providers looking for more stable income.

The IL compensation mechanism works to incentivize longer-term liquidity deposits by offering 100% protection on deposits held over 100 days. Liquidity providers who keep their deposit for 30 days are covered for 30% of any IL suffered. This protection grows by 1% per day until reaching 100% at day 100. However, any IL suffered is first covered by accrued trading fees, and additional BNT will only be minted where the fees fail to cover the loss.
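The vesting schedule described above is simple enough to write down directly. This is a sketch of the schedule as stated (a 30-day cliff at 30%, then +1% per day up to 100% at day 100); the function name is our own:

```python
# Sketch of the Bancor v2.1 IL protection vesting schedule described above:
# nothing before day 30, 30% at day 30, +1% per day, capped at 100% on day 100.
def il_protection_pct(days_staked):
    if days_staked < 30:
        return 0.0            # cliff: no protection before day 30
    return min(1.0, 0.30 + 0.01 * (days_staked - 30))
```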


Source: Bancor

The unfortunate side effect of Bancor’s model is that all swaps require BNT as an intermediary, or catalyst. This introduces two opportunities to be affected by slippage, as well as higher gas fees. Furthermore, the minting of BNT for IL compensation and single-sided liquidity deposits affects the price of BNT by increasing its supply.

Liquidity pools are limited to the amount they can receive as a single-sided deposit. Once a pool reaches this limit, new liquidity providers cannot make single-sided deposits until an existing provider withdraws their stake. Lastly, Bancor has no automatic reward/fee claiming, which means no auto-compounding or reinvestment mechanisms. 

Bancor v3 - Out Now!

The goal of Bancor v3, announced back in November 2021, was to remedy the shortcomings identified above while maintaining the value proposition of single-sided liquidity.


Bancor v3 features bn tokens, which represent a liquidity provider’s stake in a pool, including their accrued fees. The mechanics of bn tokens also ensure that IL does not affect the LP’s position. Instead of having many individual pools, v3 has one “Omnipool” which holds all deposited tokens, and a single BNT pool. This single pool is wrapped in a logic layer responsible for processing swaps, deposits and withdrawals.


Source: Bancor

This new design reduces gas fees for swaps by “virtualizing” the intermediary BNT swap, and uses pool tokens to calculate an LP’s IL. It also results in a fairer distribution of fees, because there is only one pool LPs can deposit to. The v3 architecture also does away with any limitation on single-sided deposits. The Bancor DAO still sets a limit on the liquidity available for trading in each token; however, deposits are accepted above this limit and are used to support the protocol in other ways, such as reducing IL impact or accruing yield externally. Bancor v3 has also redesigned the IL protection mechanism: LPs get 100% IL protection immediately, but incur a 0.25% withdrawal fee and a 7-day delay.

These benefits promised by Bancor v3 are predicted to increase Bancor’s presence as a viable AMM. It is expected to become a more popular platform for liquidity providers because of the IL protection and unlimited single-sided deposits. The lower gas fees will also likely see platforms like 1Inch route more orders to Bancor, increasing the trading volume on the platform and fees earned by LPs. 


UniSwap

UniSwap is unarguably the most popular AMM currently available. An indicator of its success is perhaps its codebase’s popularity among other AMMs, which forked its repo to build clones. They say imitation is the sincerest form of flattery. Like other AMMs, UniSwap is made up of liquidity pools. The market making mechanism it employs is the constant product market maker, as discussed above. On its face, Uniswap v2, in comparison to Bancor, is a much simpler mechanism.

Below is an extract from Leo Lau’s article, calculating IL in a CPMM pool.

[I]t is not hard to compute the impermanent loss of a single trade with and without fee in Uniswap V2. Suppose the trade changes the price from P to Pk. The impermanent loss, measured in percentage, can be solely expressed as a function of k.



The impermanent loss function IL(k, ρ) derived looks very similar to the impermanent loss function without fee. We can do a sanity check by setting ρ to zero, arriving at the same result. A typical Uniswap V2 fee percentage is ρ = 0.3%. When plotting the impermanent loss function, we can see there is an above-zero part between roughly k = 0.994 and 1 (a span of roughly 2ρ). In this region, impermanent loss is positive, implying liquidity providers actually gain value (the transaction fees earned outperform the loss in this region). By introducing a transaction fee, when the price moves within a certain range, liquidity providers will have a positive gain.

In the above discussion, we only considered the case where the relative price goes down. We can also calculate the exact range of k in which liquidity providers will have a positive gain.








When ρ is small, the total range of k, considering both conditions (price going up and down), is approximately 4ρ (2ρ each). This means that when the price moves within 2ρ of the original price, liquidity providers will have a positive gain. We can also calculate the maximum trading quantity in terms of the token reserve, which, not surprisingly, equals ρ when ρ is small.

Much more can be said on IL, but it is beyond the comprehension of many readers, including myself. If you are interested in reading more, the articles from Dave White et al. and Jiahua Xu et al. are excellent.

UniSwap v3

In response to these issues and the inherent capital inefficiencies of v2, Uniswap v3 was released, introducing concentrated liquidity.

Concentrated liquidity provides LPs with the ability to deposit liquidity within specific price ranges. By providing liquidity in smaller price ranges, LPs can earn more fees when the price is within that range, and reduce the capital inefficiencies of v2 by not spreading liquidity across the range of 0 to infinity. Furthermore, UniSwap v3 also features custom swap fee tiers of 0.05%, 0.3%, and 1%; a 0.01% tier was later introduced for stablecoins. This gives LPs even greater flexibility, and more adequately compensates the risk they take on with volatile assets.

On the topic of UniSwap v3, Leo Lau continues:


From Uniswap V3 whitepaper

[These benefits] are achieved by translation of the Uniswap V2 function:


From Uniswap V3 whitepaper

Translating the function downward by the y value of point a, and leftward by the x value of point b, as depicted in the figure and equation above, ensures the same effective trading outcome between a and b as if we were using the green curve as our price-determining function. When the price goes out of this range, one of the token reserves will be sold out, effectively concentrating liquidity within this price range.
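The translated invariant from the v3 whitepaper, (x + L/√p_b)(y + L√p_a) = L², directly gives the real token amounts a position holds at any price. A small sketch (the function name is our own; p_a, p_b are the position's bounds and L its liquidity):

```python
import math

# Sketch based on the Uniswap v3 whitepaper's translated invariant:
#   (x + L / sqrt(p_b)) * (y + L * sqrt(p_a)) = L**2
def amounts_in_range(L, P, pa, pb):
    """Real token amounts held by a position of liquidity L at price P."""
    sp, sa, sb = math.sqrt(P), math.sqrt(pa), math.sqrt(pb)
    sp = min(max(sp, sa), sb)      # outside the range, one side is fully sold out
    x = L * (1 / sp - 1 / sb)      # amount of token X
    y = L * (sp - sa)              # amount of token Y
    return x, y

# At the geometric mean of the bounds, the two sides have equal value (P * x == y),
# matching the observation about concentrated positions below.
x, y = amounts_in_range(L=1000.0, P=math.sqrt(0.5 * 2.0), pa=0.5, pb=2.0)
```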

For a paper on calculating liquidity distributions in AMMs, see this paper by Dan Robinson.

Where two liquidity providers have provided concentrated liquidity within the same range, these two concentrations can be added together. 

Additionally, when depositing liquidity, unlike in UniSwap v2, the value of each asset supplied by a liquidity provider to a concentrated position is not necessarily equal in UniSwap v3. The two values will only be equal when the price equals the geometric mean of the upper and lower bounds of the concentrated position.

This means that when the market price exits the bounds of the concentrated position, the liquidity provider is only left with one asset in their position (which asset depends on which bound the market price surpassed). If you would like to know more about the concentrated liquidity mechanism in UniSwap v3, you should read the v3 whitepaper. 

To summarize, v3 introduces concentrated liquidity, which improves the capital efficiency of a typical CPMM and, if used appropriately, can help reduce the risk of IL. However, it is a double-edged sword, and can magnify IL instead. It also allows new ways of interacting with a CPMM, such as range orders; however, buy-stop orders and stop-loss orders cannot be realized. There is much more to be said about UniSwap v3 and concentrated liquidity, especially from an LP’s perspective. Because LP positions can be customized with different concentrated liquidity ranges, this opens up new ways to hedge against IL and maximize yield. While we will not include it here, Leo Lau has written another spectacular article featuring mathematical insights into UniSwap v3.



Balancer

Balancer builds upon UniSwap’s CPMM model and enables multi-token pools with more than two assets. The invariant weights of the assets in a Balancer pool sum to one. This property was also true of UniSwap’s liquidity pools, except that the invariant weight of each asset was fixed at 1/2. This model can calculate the price of one asset relative to another within the same pool as the ratio of the reserves of asset A and asset B, normalized by their invariant weights. The image below shows this principle for those mathematically inclined who wish to follow along.


Based on the constant invariant, we can derive trading formulas with different inputs (trading between asset o and asset i). Asset o in this standard of notation always is the asset bought out. Asset i is the asset sent in. A and B are the token sent in / received and the current token reserve number. We can also calculate the token i sent in or token o bought out, given how the price changes.
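The two formulas described above appear in the Balancer whitepaper; a sketch (fees ignored, function names our own), where B denotes balances and w normalized weights:

```python
# Sketch of the Balancer weighted-pool formulas (from the Balancer whitepaper),
# ignoring fees. Bi, Bo = balances of tokens i and o; wi, wo = their weights.
def spot_price(Bi, wi, Bo, wo):
    """Price of token o in units of token i: (Bi/wi) / (Bo/wo)."""
    return (Bi / wi) / (Bo / wo)

def out_given_in(Bi, wi, Bo, wo, Ai):
    """Amount of token o bought when Ai of token i is sent in."""
    return Bo * (1 - (Bi / (Bi + Ai)) ** (wi / wo))

# With equal weights (1/2, 1/2) this reduces to the Uniswap constant product:
out = out_given_in(Bi=100.0, wi=0.5, Bo=100.0, wo=0.5, Ai=10.0)
```

The equal-weight case returning the same result as x * y = k is a useful sanity check on the normalization.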


Balancer also introduced an innovative Smart Order Router (SOR) algorithm.


Like most SOR algorithms, Balancer’s breaks down the whole trade into smaller trades executed in different Balancer pools to achieve a better overall swap price, by reducing the market impact the trade has on any single pool. From Leo Lau:

Suppose we want to trade in pool 1 and pool 2. If the total amount N we want to trade is below A in the above figure, we will only trade in pool 1, as the price in pool 1 is always better than the price in pool 2. If the total amount exceeds A, we will trade part of the order in pool 1 and part in pool 2. The amount traded in each pool will bring the price in each pool equal (B + C = N).

The price function, with respect to trade amount, in general, is a nonlinear function. Balancer simplifies the price function as a linear function. If there are n pools, the optimal strategy can be expressed as:

If there exists a price function such that when swapping all the tokens in its corresponding pool can not bring the price equal to all other price functions’ initial values before swapping, then the trivial, optimal strategy will be swapping all the tokens in that pool. Before doing the more complicated calculation, we need to first determine whether this condition is satisfied. If only some price functions’ initial values can not be matched, then only those price functions should be removed from calculation.
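The two-pool case described above can be sketched with linearized price functions. This is an illustrative toy (not Balancer's actual SOR code): each pool's price is approximated as p_i(x) = a_i + b_i·x, and the split equalizes marginal prices across the pools used:

```python
# Toy sketch of the linearized two-pool split described above. Each pool's
# marginal price is approximated as p_i(x) = a_i + b_i * x (a_i = current
# price, b_i = slope). Names and parameters are illustrative.
def split_two_pools(a1, b1, a2, b2, N):
    # Solve a1 + b1*x1 = a2 + b2*x2 with x1 + x2 = N, then clamp to [0, N].
    x1 = (a2 - a1 + b2 * N) / (b1 + b2)
    x1 = min(max(x1, 0.0), N)   # below the crossover, the whole order uses pool 1
    return x1, N - x1

# Pool 1 starts cheaper; a large order is split so both marginal prices match.
x1, x2 = split_two_pools(a1=1.00, b1=0.002, a2=1.01, b2=0.001, N=10.0)
```

A small order (N below the crossover point "A" in the figure) routes entirely to pool 1, matching the quoted description.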

As Leo Lau notes, these calculations do not take into account Ethereum gas fees, which should not be treated as negligible. The optimal strategy should keep a balance between the route gain and the gas fee loss.

There is no reason (besides the obvious business reason) that the SOR algorithm should be limited to only Balancer pools. Including the pools and price functions of other AMMs will enable even better pricing on larger trades. 

While Balancer expands available liquidity for traders by enabling multi-token pools, and achieves better swap prices with their SOR algorithm, a liquidity pool is “only as strong as its weakest asset”. Increasing the number of tokens will always increase the risk. 


Curve

Despite its popularity, there are many drawbacks to UniSwap’s AMM model. One of the biggest complaints about UniSwap v2 was its capital inefficiency for stablecoin swaps. In these situations, a constant product model is unsuitable. A better model would be a constant sum formula - x + y = k. For example, in a USDC-USDT pool with a k value of 20,000, a deposit of 500 USDC would result in an output of 500 USDT (minus fees) to ensure the k value stays at 20,000. However, a viable AMM cannot rely solely on this constant sum formula, because the pool could be drained completely of one token. Curve’s StableSwap model takes inspiration from both the constant sum and constant product models, introducing a variable into the constant sum formula that changes dynamically according to the reserve values. This results in a pool that operates like a constant sum market maker when the price is close to $1, and like a constant product market maker when the price begins to drift away from that peg - as the price drifts, it becomes more expensive to withdraw the scarcer asset.
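The trade-off between the two pure invariants is easy to see numerically. A sketch (fees ignored, function names our own) for a 500-token trade into a balanced 10,000/10,000 stablecoin pool:

```python
# Sketch contrasting the two pure invariants for a stablecoin trade (no fees).
def constant_sum_out(x, y, dx):
    """x + y = k: zero slippage, but the pool can be fully drained of y."""
    return min(dx, y)

def constant_product_out(x, y, dx):
    """x * y = k: can never be drained, but has slippage even between like assets."""
    return y - (x * y) / (x + dx)

dx = 500.0
sum_out = constant_sum_out(10_000.0, 10_000.0, dx)       # exactly 500
prod_out = constant_product_out(10_000.0, 10_000.0, dx)  # noticeably less than 500
# StableSwap blends the two: sum-like near the peg, product-like away from it.
```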


The idea behind Curve v1, or StableSwap, is explained well by Leo Lau below.

First we consider a special case, where the number of each token in the liquidity pool is the same. It is trivial to show the equation at equilibrium holds (χ is the weight; Dⁿ⁻¹ is multiplied in to make the CSMM and CPMM have the same order of magnitude). However, when the liquidity pool is out of equilibrium, if χ is a constant number, the equation will no longer hold. Therefore, we need to make χ dynamic. Curve V1 chooses a functional form of χ that, at extreme imbalance, goes to zero, meaning the equation is dominated by the CPMM. At equilibrium, χ equals A, a constant number optimized by simulating historical data. Substituting χ gives us an equation that holds all the time.

Next, let us derive how StableSwap actually calculates swap outcomes. Based on the current token numbers in the pool, we can calculate D. For instance, if we want to swap for token j, we can separate xⱼ and solve the equation for xⱼ:


The equation can be reduced to a quadratic form. Sadly, there is currently no math library to solve quadratic equations in Vyper. Thus StableSwap implements Newton’s method to solve for xⱼ. The iteration formula doubles its precision every iteration, so an acceptable xⱼ can be calculated within the set gas limit. Finally, the difference between xⱼ after and before the swap will be the amount of token j bought out.
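The two Newton iterations described here can be sketched in Python, following the StableSwap whitepaper and the structure of the Vyper implementation (here in floating point rather than on-chain integer arithmetic, so this is illustrative, not the production code):

```python
# Sketch of StableSwap's Newton iterations, after the whitepaper and the
# structure of the Vyper implementation. A is the amplification coefficient.
def get_D(balances, A, n_iter=256):
    """Solve the StableSwap invariant for D given the pool balances."""
    n = len(balances)
    S = sum(balances)
    Ann = A * n ** n
    D = S  # starting guess
    for _ in range(n_iter):
        D_P = D
        for x in balances:
            D_P = D_P * D / (n * x)
        D_prev = D
        D = (Ann * S + n * D_P) * D / ((Ann - 1) * D + (n + 1) * D_P)
        if abs(D - D_prev) < 1e-10:
            break
    return D

def get_y(balances, A, i, j, dx):
    """New balance of token j after dx of token i is added to the pool."""
    n = len(balances)
    Ann = A * n ** n
    D = get_D(balances, A)
    xs = list(balances)
    xs[i] += dx
    S_, c = 0.0, D
    for k in range(n):
        if k == j:
            continue
        S_ += xs[k]
        c = c * D / (n * xs[k])
    c = c * D / (n * Ann)
    b = S_ + D / Ann
    y = D  # Newton iteration on the quadratic form mentioned above
    for _ in range(256):
        y_prev = y
        y = (y * y + c) / (2 * y + b - D)
        if abs(y - y_prev) < 1e-10:
            break
    return y

# Swap 500 of token 0 into a balanced 10,000/10,000 pool with A = 100:
pool = [10_000.0, 10_000.0]
dy = pool[1] - get_y(pool, A=100, i=0, j=1, dx=500.0)  # close to 500: tiny slippage
```

The output sits between the constant sum result (exactly 500) and the constant product result (about 476), which is precisely the blending behavior the model is designed for.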


From StableSwap whitepaper


From StableSwap whitepaper

The StableSwap market maker, compared to the CPMM, is pressed and flattened against x + y = const. This keeps the swap price close or equal to 1 with very small slippage in the vicinity of the equilibrium point (when no token in the pool is close to being sold out). When one token in the pool is almost sold out, the price starts dropping drastically. This is easy to understand: the curvature/slippage of the function is concentrated/pushed elsewhere to ensure small slippage near the equilibrium.

The CPMM and the dynamic weight in this model are used to punish extremely large informed orders, preventing tokens in the pool from being completely sold out.

Leo Lau has done a fantastic job explaining the Curve market maker model. This model’s strengths are apparent when swapping stablecoins, as it can achieve very low slippage, as evidenced in the plot above. This, however, is a double-edged sword, because it limits StableSwap to stablecoins.


Curve v2

Curve v2 was introduced to allow customizable price pegs by changing the dynamic weight χ to K. Internally, for each liquidity pool, Curve calculates a price to peg the assets to, based on trading within the pool, using a running exponential moving average, and concentrates liquidity at that peg price. Additionally, Curve v2 trading fees are dynamic, adjusting with the volatility of the pool: in periods of low volatility the fees are lower (~0.05%), and in times of higher volatility the fees are higher.


Jumping back to Leo Lau:


K0 varies between 0 (imbalance) and 1 (equilibrium), χ and K (normalized by A) as functions of K0 are plotted below:


We can get a grasp of how Curve V2 smooths the price transition from the figure above. Basically it makes the dynamic weight quickly decline, when moving away from the equilibrium. The lower γ is, the more rapid the decline is. Making the dynamic weight quickly decline to zero essentially is equivalent to enforcing the function to behave much more like CPMM, even [if] the pool is only a little bit imbalanced.


There is an awesome tweet by DW on twitter that explains the same concept.


The price transition problem is solved. Now we discuss how Curve V2 implements other price pegs rather than 1. Having a price peg (they call it price scale in the whitepaper) means there exists a[n] equilibrium point on the market maker curve where the scaled token numbers are equal:


The scaled token numbers satisfy a similar equation as StableSwap. Take the simplest 2-token pool for instance: the market maker function can be expressed in terms of A, γ, p, D, x, y. The function can be simplified to a cubic function with respect to x, y (a sextic function with respect to D):


A plot of this function with typical values is shown below:


The price of token x relative to token y can also be plotted. There is a constant part of the Curve V2 price function near the equilibrium point (1000, 1000). Curve V2 delays the price movement slightly, instead of completely as in StableSwap. As the trading amount increases, the price starts to react with smaller slippage compared to the CPMM. To summarize, Curve V2 achieves very small slippage near the equilibrium point and better slippage than the CPMM in other regions. As for price pegs other than 1, we simply change p in the cubic/sextic equation above. Therefore, the price peg problem is also solved.

We can use a similar Newton’s method as in StableSwap to calculate swap results. First, we calculate D based on the current token numbers in the pool (this time using Newton’s method, since the equation is far more complicated). Second, if we want to swap for token i, we use Newton’s method again to solve for xᵢ. Again, the difference (normalized by its price scale) will be the amount of token i bought out (all the xᵢ are scaled token numbers).

To ensure the roots of the polynomial can be found within a set gas limit, the Curve whitepaper discusses the starting guesses chosen, as well as the parameters in the function. They use a method called fuzzing (the hypothesis framework) to determine those optimal values. We currently do not know the details of this method and would love to learn more.

In order to ensure small slippage (trading near the equilibrium point), Curve V2 constantly repegs the market maker function by changing the price scale. However, repegging can lead to value losses endured by liquidity providers. Curve V2 introduces a variable called Xcp to mitigate this problem:


If the loss after one repeg is larger than half the accumulated Xcp (the value gained relative to the original Xcp), the algorithm keeps the market maker function the same. There are several questions about this we would like to answer in the future, since the whitepaper only briefly discusses Xcp. A look at its source code may help:

  • Is the Xcp value proportional to the value calculated using the current token numbers in the pool?

  • Does depositing or withdrawing liquidity count towards Xcp?

  • If withdrawing liquidity counts towards Xcp, will it be stopped if the decrease in Xcp is too large?

For repegging, Curve V2 uses an EMA (Exponential Moving Average) price oracle to determine the oracle price. The new oracle price vector is a linear combination of the last swap price vector and the previous oracle price vector. The new price scale vector moves in a similar direction to the oracle price, but is not set equal to it: the price scale vector is lagged behind the oracle price by introducing the relative price change step size s. The equation can be derived using simple Euclidean geometry. The EMA price oracle and the price scale delay reduce the effect of volatile recent price movements and better represent the long-term market price.
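A minimal sketch of this two-stage smoothing for a single price (real Curve pools operate on price vectors with additional safety checks; the smoothing factor alpha and step size s below are illustrative values, not real pool parameters):

```python
# Two-stage smoothing: an EMA oracle follows the last trade price,
# and the price scale follows the oracle by at most a relative step s.
# alpha and s are illustrative values only.

def ema_oracle(prev_oracle: float, last_price: float, alpha: float = 0.1) -> float:
    """New oracle price: linear combination of last swap price and old oracle."""
    return alpha * last_price + (1.0 - alpha) * prev_oracle

def step_price_scale(scale: float, oracle: float, s: float = 0.002) -> float:
    """Move the price scale toward the oracle, by at most a relative step s."""
    max_step = s * scale
    delta = oracle - scale
    if abs(delta) <= max_step:
        return oracle
    return scale + max_step if delta > 0 else scale - max_step

oracle, scale = 1.0, 1.0
for trade_price in [1.05, 1.06, 1.04, 1.05, 1.05]:  # noisy market prices
    oracle = ema_oracle(oracle, trade_price)
    scale = step_price_scale(scale, oracle)
print(round(oracle, 4), round(scale, 4))
```

After a burst of trades around 1.05, the oracle has moved only part of the way there, and the price scale lags further behind the oracle — the delayed repegging behaviour described above.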


Regarding the relative price change step size s: based on our experience refreshing the Curve finance webpage, s changes on the scale of at least tens of minutes for some pools. How Curve V2 updates s is an interesting question that is beyond our current knowledge. Looking at the source code would help here as well.

A plot demonstrating one single repegging process is shown below:


Suppose we start our swap at x = 1000 and end it at x = 1400. Originally, the price is pegged at 1. After the swap, the price moves to 0.6. To simplify, and purely for demonstration purposes, we set the new price scale equal to the spot price (the price is now pegged at 0.6) and solve the sextic equation to get D. The market maker function is now pegged at 0.6, as shown above.

Repegging is essentially equivalent to finding a new market maker function that goes through the current token numbers point ((x, y) in the 2-token pool case), with an equilibrium point at (x0, y0) such that y0/x0 is equal to the absolute value of the derivative at (x0, y0). A fun project would be fetching real Curve finance pool parameters to build a better demonstration (possibly an animation) of the repegging process.

Given the market maker behaviour of Curve V2 discussed above, it is sensible to make the transaction fee a linear combination of two tiers of transaction fees with dynamic weights, measuring how far we are from the equilibrium point (i.e. whether the current price movement behaves more like StableSwap or CPMM). The fmid and fout values chosen by Curve V2 are 0.04% and 0.4%. A figure demonstrating how the fee changes in a 2-token pool is plotted below (assuming no repegging or liquidity change):
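A sketch of this blended fee in Python; a weight of this shape appears in the Curve V2 whitepaper, but γ_fee = 0.0005 below is an illustrative value rather than a real pool parameter:

```python
# Blended fee: f = g * f_mid + (1 - g) * f_out, where the weight g is 1
# at the equilibrium point and decays toward 0 as the pool imbalances.
# GAMMA_FEE is an illustrative value, not a real pool parameter.
F_MID, F_OUT, GAMMA_FEE = 0.0004, 0.004, 0.0005

def dynamic_fee(x: float, y: float) -> float:
    k0 = 4.0 * x * y / (x + y) ** 2      # 1 when x == y, < 1 otherwise
    g = GAMMA_FEE / (GAMMA_FEE + 1.0 - k0)
    return g * F_MID + (1.0 - g) * F_OUT

print(dynamic_fee(1000.0, 1000.0))   # balanced pool: fee = f_mid = 0.04%
print(dynamic_fee(1900.0, 100.0))    # imbalanced: fee approaches f_out
```

Near balance the trader pays the low StableSwap-like fee, and the fee climbs toward fout as the pool moves into its CPMM-like region.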

Price Range in Curve

The following paragraph is another extract from Leo Lau’s article, briefly discussing the price range with respect to Curve v2.

We can apply the price range concept to Curve V2. Since there is no analytical expression for the price with respect to the token numbers in the pool, we need to interpolate the relation between the price and the token numbers. The amount of shift applied to the market maker function is determined by the price range. Writing such a program could make the capital efficiency even higher.

Once again, Leo Lau does a fantastic job covering the mathematical aspects of Curve's market maker models. Curve v2 improves upon its first iteration by enabling the swapping of other tokens, not just stablecoins. It also has a much smoother price transition. Unlike the CPMM seen in UniSwap, it uses an internal price oracle to incorporate the market price. However, because it uses complex calculations to achieve this, the gas fees users pay are typically higher, and repegging can be risky when relying only on a single, internal oracle.

Both UniSwap v3 and Curve v2 feature a form of concentrated liquidity; however, they differ in how it is implemented and in the input individual LPs have over the price range of concentration. As discussed above, UniSwap v3 offers LPs the flexibility to determine their own price range. In Curve v2, this flexibility does not exist, and LPs are restricted to concentrating their liquidity where the internal price oracle pegs the price. That is not to say Curve is any less complex; it features nine customizable parameters that change internal mechanisms such as the bonding curve and price scaling. Customization of these parameters can be abstracted away, though, allowing for an easier LP experience.

Single-Sided Liquidity

Single-sided liquidity has already been discussed in this article. It is an attractive concept for LPs, and it is therefore a priority for AMM developers to incorporate such functionality into their platforms. Generally, there are two approaches to providing single-sided liquidity. The first is the obvious solution, where half of the tokens deposited are swapped automatically into the other token in the pair. The second, discussed in the Balancer and Curve whitepapers, is to deposit the tokens even though they are only one half of the swap pair, and let the pool rebalance itself. I wanted to include Leo Lau's discussion of this concept, because his mathematical breakdown provides meaningful insight.

By intuition, there are two solutions: 1. swap part of the tokens first using the same protocol 2. deposit single-sided liquidity regardless and let arbitrage bring the price back to the market price.


For instance, we want to deposit liquidity in a 2-token pool with equal value.


We only have token x. It is not hard to calculate how much we need to swap so that the value of each token is equal after the swap. It is also easy to show that β is always between 0 and 1, i.e. a reasonable result. However, the price after the swap can differ from the price when depositing liquidity, so I wonder whether protocols actually make the swap and the liquidity deposit one atomic operation. There is also price slippage when doing the swap. How protocols like Balancer and Curve handle single-sided liquidity deposits remains a question to us as of right now. It makes sense to do the operation described above if the slippage is small.
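For a plain CPMM pool the swap amount has a closed form (ignoring fees): if the pool holds x of the deposit token and we bring dx, swapping s = √(x(x + dx)) − x leaves the remainder in exactly the pool's post-swap ratio. A quick sketch under that no-fee assumption:

```python
import math

# Optimal single-sided deposit into a no-fee CPMM pool (x * y = k):
# swap s of the dx we bring, then deposit the rest plus the proceeds.
def optimal_swap(x: float, dx: float) -> float:
    return math.sqrt(x * (x + dx)) - x

x, y, dx = 1000.0, 1000.0, 100.0
s = optimal_swap(x, dx)
dy = y * s / (x + s)                 # token y received from the swap
x_pool, y_pool = x + s, y - dy       # reserves after our swap
# the leftover (dx - s, dy) now matches the pool ratio exactly:
print((dx - s) / x_pool, dy / y_pool)
```

With fees included the swap amount grows slightly and the formula gains fee-dependent terms; the atomicity and slippage caveats raised above still apply.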

The second approach, as described in the Balancer and Curve whitepapers, is to deposit regardless. This could alter the price quite a bit, and the resulting arbitrage may make the impermanent loss significant too. We personally do not see any countermeasure in the Balancer whitepaper and docs. Curve, on the other hand, introduces an imbalance fee, ranging from 0% to 0.02%, when depositing single-sided liquidity. In reality, there is little incentive to deposit single-sided liquidity under the second approach, due to arbitrage and impermanent loss. It would be interesting to learn more about other innovations related to single-sided liquidity.

γ Value

In Curve V2, γ is a constant. What would happen if we made it dynamic as well? For example, we could make it a function of K0; the simplest case would be making it equal to K0. The motivation is to make the function behave even more like StableSwap close to the equilibrium and even more like CPMM far away from it.
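Substituting γ = K0 into the Curve V2 weight K = A·K0·γ²/(γ + 1 − K0)² collapses the denominator to 1, giving simply K = A·K0³. A quick numerical check (A = 100 is an illustrative value):

```python
# With gamma = K0, the Curve V2 dynamic weight
# K = A * K0 * gamma^2 / (gamma + 1 - K0)^2 simplifies to A * K0^3,
# since gamma + 1 - K0 == 1. A is an illustrative value.
A = 100.0

def weight_dynamic_gamma(k0: float) -> float:
    gamma = k0
    return A * k0 * gamma**2 / (gamma + 1.0 - k0) ** 2

for k0 in (1.0, 0.9, 0.5, 0.1):
    assert abs(weight_dynamic_gamma(k0) - A * k0**3) < 1e-9
print("K = A * K0^3 when gamma = K0")
```

A cubic decay is far gentler than the sharp collapse a small constant γ produces, which is consistent with the observation below that the resulting market maker function still behaves like StableSwap.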


The purple dashed curve, which is between the StableSwap and small γ curve, should give us a market maker function in between StableSwap and Curve V2. However, when we plot the market maker function, it behaves exactly like StableSwap:


There are two possible fixes: 1. make A smaller; 2. choose a higher power of K0 to represent γ. Both seem viable; however, the first ruins the purpose of A being a big number, which is to peg the market maker function to a price. Further testing on our part suggests that changing A does not make a difference to the functional behavior (the market maker function still looks like StableSwap after changing A).

The second solution would make the gas fee higher: a higher power of K0 corresponds to a higher-order polynomial equation to solve. In fact, the reason Curve V2 chooses that particular form of dynamic weight K is to mimic the behavior of K0 raised to a large power, while not raising the order of the polynomial.

The interesting question here is: can we find a better dynamic weight that simplifies the equation we need to solve while maintaining the same or better functionality than Curve V2? When designing such a dynamic weight, we must keep a balance between small slippage and the market maker function's ability to react to informed large orders. StableSwap with only a price peg clearly will not work in this regard, because almost all the tokens will be bought out if the pegged price differs from the market price. Only when this balance is maintained is repegging viable.

DEX Aggregators

DEX aggregators are an increasingly popular tool in DeFi. Their goal is to achieve a better swap price (by reducing slippage) for end users by breaking up an order and spreading it across a collection of liquidity pools on multiple DEXs. Balancer's SOR algorithm is a version of this, limited to Balancer's own liquidity pools. Leo Lau dives into this tool, and discusses how a DEX aggregator can be price function agnostic.

The general solution of Balancer’s SOR algorithm, without any price function approximation, can be expressed below:


Price functions can take any form, depending on the AMM algorithms that generate them. This means the equations that must be satisfied, such as total token number conservation and an equal final price in every pool, may not have an analytical solution.

Therefore, we introduce a technique commonly used in fields like machine learning: gradient descent. We define the loss function as the variance of the values of the different price functions. After choosing a starting guess (a trivial, uninformed guess would be an equal swap amount N/n in each pool), we iterate (changing each swap amount by the partial derivative of the loss function with respect to that variable, multiplied by the learning rate l) until we reach an optimal result within a set error tolerance.
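A minimal sketch of this idea for two CPMM pools: split a total sell amount across the pools, and run finite-difference gradient descent on the variance of the post-trade marginal prices, projecting the gradient so the total stays fixed (the pool reserves, learning rate, and iteration count are illustrative choices):

```python
# Split a total sell order of token X across several CPMM pools so that
# the post-trade marginal prices are equal (variance of prices -> 0).
# Pool reserves, learning rate, and iteration count are illustrative.
pools = [(1000.0, 1000.0), (2000.0, 2000.0)]  # (x reserve, y reserve)
total = 300.0

def price(pool, a):
    """Marginal price of X in Y after selling a into the pool: k/(x+a)^2."""
    x, y = pool
    return x * y / (x + a) ** 2

def loss(amounts):
    p = [price(pool, a) for pool, a in zip(pools, amounts)]
    mean = sum(p) / len(p)
    return sum((pi - mean) ** 2 for pi in p) / len(p)

amounts = [total / len(pools)] * len(pools)  # uninformed equal split
lr, eps = 1e5, 1e-3
for _ in range(2000):
    base = loss(amounts)
    grad = []
    for i in range(len(amounts)):
        bumped = list(amounts)
        bumped[i] += eps
        grad.append((loss(bumped) - base) / eps)
    shift = sum(grad) / len(grad)        # project: keep the total fixed
    amounts = [a - lr * (g - shift) for a, g in zip(amounts, grad)]
print([round(a, 2) for a in amounts])    # optimum here is [100.0, 200.0]
```

For these two pools the exact optimum is to send 100 to the smaller pool and 200 to the larger one (the larger pool absorbs twice the flow at the same price impact), and the descent converges to it.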

Since the total trade amount, as a function of the final equal price, is monotonic, this method should be able to find the global minimum (variance = 0). Again, the calculations above assume there is no trivial solution (i.e. there is no price function such that swapping all the tokens in its corresponding pool still cannot bring its price to the other price functions' initial values before swapping).

Pivot Algorithm

The Pivot algorithm tries to pivot the market maker function by making it go through a fixed point (x0, y0).


The price at (x0, y0) will always be the current market price Pt by design. In concept, this ensures that arbitrage will always bring the pool back to the point (x0, y0), and the impermanent loss will be zero because of this feature. In reality, however, this algorithm does not have enough parameters to fit both the current reserves (x, y) and (x0, y0). This means we have to wait for the pool to return to (x0, y0) before changing the market maker function.


As we can see from the figure above, the after-swap point is not on the new market maker function (the blue and dashed-blue curves). The pool may not have any incentive to return to (x0, y0) either, if the current market price is smaller than the spot price at the current reserves.

We wonder whether there exists a function that goes through both (x, y) and (x0, y0) with a tunable derivative at (x0, y0) to fit the market price. If we require the function to be convex, then the market price cannot be smaller than the slope of the linear segment between those two points. Thus, there may be no complete solution to this problem if the market maker function has to be convex.


Concentrated Liquidity Managers

Another tool that has emerged as a result of liquidity concentration in newer DEXs is the concentrated liquidity manager (CLM). The value proposition of these tools is that they simplify the LP experience on DEXs which allow concentrated liquidity. As discussed above, platforms like Curve v2 and UniSwap v3 have become increasingly complex now that liquidity concentration is possible, and this makes the role of the LP more difficult. Concentrating liquidity incorrectly can have significant consequences for an LP's position, whether excessive IL, loss of fee-earning potential, or other risks. CLMs offer users a platform that takes this stress and complexity out of the LP experience and aims to ensure yields are maximized. Some of the benefits of CLMs are:


  • Automatic compounding

  • Reduced gas fees when rebalancing your position

  • Automatic price range adjustment

  • Optimal price range discovery

    • For fee generation and/or IL minimization

The CLM service is provided through vaults, with an LP's position represented by ERC-20 tokens. This functionality - fungible LP positions - is not a foreign concept in DeFi, and is often used to leverage LP positions in yield farming. UniSwap v3 LP provisioning did away with this concept, so CLM LP positions are a possible workaround.

The methodology used to rebalance LP positions differs between CLMs. Some protocols, like Charm Finance or Sorbet Finance, use an automated strategy where Keeper bots monitor the pool and rebalance when required. Other protocols maintain this process off-chain and use proprietary methods. Lastly, protocols like Steer Finance and Sommelier Finance let their users decide, by allowing them to create custom strategies.

Overview of Concentrated Liquidity Managers

It remains uncertain whether CLMs are effective at maximizing an LP's yield. They are also at risk of being front-run when their rebalancing is published publicly. Existing research suggests that active liquidity management does not achieve higher returns than passive liquidity provisioning. However, CLMs are still in their infancy, and new innovations may minimize or avoid these risks altogether.


Just In Time Liquidity

A growing LP trend, and not for the better, is just-in-time (JIT) liquidity. JIT liquidity is where an LP monitors the mempool for pending trades in a liquidity pool. A bot then adds liquidity to the pool, concentrated narrowly around that trade, captures the majority of the fees from the trade, and withdraws the liquidity again. This typically happens within the span of a single block. By doing this, the LP can capture most of the fees and avoid the risk of IL. For a good explanation, and an example walkthrough of what this looks like, Amber have written an excellent article on the topic.

Proponents argue that this form of liquidity provisioning improves the experience of the trader using the pool to swap, because it significantly reduces the price impact of their trade. The downside is that it cuts out the rewards for other, passive liquidity providers and increases their exposure to IL. Furthermore, if this type of liquidity provisioning grows in popularity, it could make passive liquidity provisioning obsolete and turn swapping into a request-for-quote type market - a dire outcome for DeFi.


AMM challenges


Front-running is one of, if not the, biggest challenge facing AMM developers and the DeFi community. It is where a user sees a pending transaction and places their own orders around it: buying ahead of the victim's trade to drive up the price, then immediately selling afterwards to earn a profit. The front-runner "sandwiches" the original buyer and extracts value from the transaction at the buyer's cost. This is possible because of the public nature of blockchains.

While not directly responsible, miners are catalysts in this activity, and they do not have strong enough incentives to prohibit this conduct. "Validators may not have sufficiently strong incentives to monitor private pools because this reduces their MEV, so the execution risk for users who join these private pools goes up," remarked Agostino Capponi, an associate professor of industrial engineering and operations research at Columbia University. Private pools were one proposed solution to this issue, but it has been shown that they do not work, prompting calls for other solutions.

One solution is to promote privacy through the use of zero-knowledge proofs. By bundling transactions into a rollup and proving them using zk-SNARKs, individual orders can be hidden from front-running bots. This solution has its own intricacies and problems; however, they go well beyond the scope of this article.


Challenging Business Models

While handling billions in trading volume, and holding even more billions in TVL, AMMs seem on the face of it to be a lucrative business. However, this is not the case. Valuations are often heavily reliant on expected future growth, and the trading fees flow to liquidity providers, who are a separate party, not to the AMMs themselves. Currently, AMMs rely on the value of their own token; however, a token's value is ultimately tied to the value of the community it represents.

While some make this a criticism of DEXs and DeFi, others see it through a different lens: these protocols offer a valuable service and create real value, even if revenue is not generated for the platform itself. The jury is clearly still out on the profitability and sustainability of AMMs, with many outspoken critics on either side.


Other Innovations in the AMM Space

AMMs have played a pivotal role in the rise of decentralized finance. This section discusses their future, and the innovations that are being introduced.

Bringing HFT to Defi

One issue with trading on existing AMMs is high gas fees, caused by the separation of liquidity pools into their own smart contracts. CrocSwap is a new DEX with a constant function market maker model which aims to solve this issue by maintaining all liquidity pools in a single smart contract. Each individual pool is represented as a lightweight data structure within that contract. This design paves the way for higher-frequency trading by allowing multi-step trades across multiple pools within the same transaction.

CrocSwap provides a similar LP experience to UniSwap v3, with both ambient LP and concentrated LP positions. The CrocSwap architecture makes the experience for both LPs and swappers better by enabling liquidity querying of both types in a single contract call. CrocSwap also features automatic fee compounding. 

CrocSwap recognizes the issue of JIT liquidity, and will implement deposit and withdrawal time limits. However, market participants may be whitelisted to allow them to provide JIT liquidity.



Equality Among Stablecoins

While Curve has been a revolutionary platform for swapping like-priced assets, such as stablecoins, it still has its drawbacks. Platypus promises to improve this experience, offering a platform where stablecoins can be swapped efficiently. It features a single liquidity pool in which all assets are stored. Its AMM model builds upon the Curve v1 model, but offers a larger, flatter region in the AMM curve.


Source: Platypus. Green line is Platypus AMM invariant.


Oracle-Assisted Pricing

One issue discussed earlier in the paper is the reactive nature of AMMs. Traditional market makers can operate proactively, which significantly helps them hedge against risk and remain profitable. Because AMMs are always "lagging" behind the market direction, they constantly receive "toxic flow" - arbitrage trading from more informed parties. The consequence is IL, a risk that LPs are burdened with. To fix this, some protocols are implementing price oracles that provide an external market price, helping the AMM to be more proactive.



Dodo is a new DEX that utilizes a proactive market maker model. The following is an extract from Dodo’s documentation that describes the purpose of the PMM and illustrates this with an example.

PMM is an inventory management strategy. When the quantity of an asset becomes low, the PMM algorithm automatically increases the price quoted for this asset in anticipation of buying back the missing inventory from the market.

A simple example

Next we will use a simple example to illustrate how PMM works. (The following example numbers are not exact, but are just to help you understand how the algorithm works)

Peter's boss gives him $100 and 10 apples. There are people in the market who buy and sell apples, and Peter has to satisfy these people's buying and selling needs with the inventory he has on hand. In more technical terms, this is "providing liquidity to the apple market".

The owner tells Peter that the apples are about $10 each and then goes home to rest, leaving Peter to wait for the users above in the market. Someone buys an apple from Peter, Peter adds a little price to it and sells it to him for $12. The $2 is the so-called "slippage". At this point Peter's inventory is $112 and 9 apples. That is + $12 - 1 apple.

Peter automatically places a pending order at $11 (slightly above market price), expecting to buy back an apple as soon as possible to make up the shortfall. Soon Peter buys back the apple for $11, leaving Peter with $101 and 10 apples in stock, or +$1 and 0 apples. Although it cost $1 more, the extra $2 received through the slippage was enough to make up for it, and Peter helped his boss make a net profit of $1.

The PMM algorithm is the Peter in the story above, and you are Peter's boss. What Peter does, in a nutshell, is provide liquidity while maintaining a healthy inventory by actively adjusting prices.
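The inventory-driven markup Peter applies can be sketched as follows. The formula below mirrors the shape of Dodo's PMM pricing as we read it from its documentation (i is the oracle price, B the current base-token inventory, B0 the target inventory, and k the slippage parameter); treat the exact form and the value of k as assumptions rather than verified protocol code:

```python
# Sketch of a proactive quote: when base inventory B falls below the
# target B0, the quoted price is marked up above the oracle price i,
# encouraging the market to sell the missing inventory back.
# The exact formula is our reading of Dodo's docs, not verified code.
def pmm_quote(i: float, B: float, B0: float, k: float = 0.1) -> float:
    R = 1.0 - k + k * (B0 / B) ** 2      # R = 1 when inventory is on target
    return i * R

print(pmm_quote(10.0, 10.0, 10.0))   # on-target inventory: quote = oracle
print(pmm_quote(10.0, 9.0, 10.0))    # short one apple: quote above $10
```

With a full inventory the quote sits at the oracle price, and the further the inventory falls below target, the higher the quoted price climbs — exactly Peter's $11 buy-back behaviour.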
To avoid front-running, Dodo sets the trading fees between 0.3% and 0.5% for volatile pairs. This makes front-running profitable only when the market price changes by more than 0.6-1.0% (the fee is incurred twice, once for buying and once for selling). Below are the results of backtesting by the Dodo team, which indicate that this design limits such losses to the most volatile of environments.
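The breakeven arithmetic is simple: a front-runner pays the fee on both legs of the round trip, so the price must move by more than roughly twice the fee before the attack is profitable. A quick check:

```python
# A front-runner buys and then sells, paying the trading fee twice,
# so the market must move by more than ~2x the fee to break even.
def breakeven_move(fee: float) -> float:
    return 2.0 * fee

for fee in (0.003, 0.005):           # Dodo's 0.3% and 0.5% fee tiers
    print(f"fee {fee:.1%} -> breakeven move {breakeven_move(fee):.1%}")
```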


Source: Dodo

The Dodo platform also boasts single-sided liquidity provisioning, which leads Dodo to claim that LPs can suffer no IL. Dodo's design and benefits do not come without cost, however, and they fail to remove all risk that LPs face. In place of IL, the risk facing a proactive market maker is inventory risk. This type of risk is something traditional market makers must manage: the market moves against one asset in a pair, and the market maker ends up with an oversupply of that asset. Dodo's design so far fails to mitigate this risk, hence the majority of its TVL being in stablecoins.

Percentage of TVL Between USD Stableswaps


Source: Dodo, Uniswap as of 29 March 2022


Lifinity is another PMM, living in the Solana ecosystem. It sources its oracle pricing from the Pyth Network. Solana's design, particularly the absence of a fee market (although this is set to change), combined with Pyth's data feeds, prevents front-running on the platform. Access to Lifinity is currently restricted for LPs, but beta testing is proving positive.



Clipper is a DEX designed for retail investors, claiming to be the DEX for "self-made crypto traders" executing trades under $10,000. The model it implements is a combination of the CPMM and CSMM models. The following is a breakdown of Clipper's AMM from Leo Lau's article.

Clipper uses an AMM algorithm that best suits the needs of small trades. It generalizes the Constant Product Market Maker (CPMM) and Constant Sum Market Maker (CSMM) as its two extreme cases (k = 1 and k = 0).


When there are only 2 types of tokens (X and Y), the invariant can be reduced to a simpler form where x0 and y0 are the token numbers set by the initial liquidity provider. Below is how the pool behaves under different k values. The x and y-axis are normalized by x0 and y0.


Smaller k values correspond to lower slippage (the function is less convex) in the vicinity of (1, 1). When k is between 0 and 1, the invariant function can intersect the x and y-axes, which implies the tokens in the pool can be sold out. The price at such an intersection is zero, meaning the price is better than the CPMM price until a turning point; after the turning point, the CPMM price is better. This is illustrated in the figure below:


Again, the x-axis is normalized. The price of token X relative to Y decreases as we move away from the initial point (1, 1). We can calculate precisely where the intersection happens:


Pros: By introducing k, Clipper achieves lower slippage (better price) when trading quantity is small. The following chart from the Clipper whitepaper further demonstrates this point.


From Clipper whitepaper

Cons: When trading quantity passes a certain threshold, the price will become significantly worse than CPMM.

In order to guarantee a better price, the algorithm has to constantly re-peg (changing x0 and y0) to keep the current pool reserves near the (1, 1) point. It could use the same mechanism Curve uses: re-pegging by following an internal price oracle that tracks the market price. Essentially, this is equivalent to solving the following formula, but this time with x and y known and P given by the price oracle. Solving this equation for x0 gives us the new equilibrium point.


This ensures we always trade close to the market price with smaller slippage. We have not yet investigated whether Clipper implements this, as it is not explained in the Clipper whitepaper. A further look at its source code is needed.

The price range concept can also be applied to Clipper:

Time-Weighted Automated Market Maker

Time-weighted automated market makers (TWAMM) are a recent innovation in the AMM space that boast the ability to trade in both directions at the same time. A TWAMM does this by converting a long-term order into an infinite array of smaller orders, which can trade either way simultaneously. Orders in the same direction occurring at the same time can be aggregated to simplify the calculation. The result is that the overall long-term order is executed at the time-weighted price over that period. This may sound confusing, and it is, so I will let Leo Lau describe the algorithm.

As of right now, there only exist closed-form TWAMM solutions for two types of AMMs, CPMM and LMSR (Logarithmic Market Scoring Rule).

Let us consider the general case where, during a period of time, the total sale of token X is xin and the total sale of token Y is yin. The selling rate of X is f(t) and the selling rate of Y is g(t). The net change in the number of token X from time t to t + dt is the number of token X sold, minus the number of token X bought during this period at the exchange rate dy/dx. Because the selling quantity of token Y is infinitesimally small during this period, the spot price can be used as the actual exchange rate.


Thus, we arrive at a nonlinear first order differential equation. Depending on the form of dy/dx, f(t) and g(t), the equation may or may not have a closed-form solution.
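For a CPMM (y = k/x, spot price y/x), this ODE reads dx/dt = f(t) − g(t)·x²/k. Before turning to the closed form, we can sanity-check it numerically with a simple Euler integration (the pool sizes and selling rates are illustrative):

```python
# TWAMM ODE for a CPMM pool: dx/dt = f(t) - g(t) * x^2 / k, where f and
# g are the selling rates of X and Y. Pool sizes and rates illustrative.
def integrate(x0: float, k: float, f: float, g: float, steps: int = 10000) -> float:
    x, dt = x0, 1.0 / steps
    for _ in range(steps):
        x += (f - g * x * x / k) * dt    # Euler step over one period
    return x

k = 1000.0 * 1000.0                      # x0 * y0
# equal selling ratios (x_in/y_in == x0/y0): the pool stays at its
# equilibrium and the TWAMM matches the two flows against each other
print(integrate(1000.0, k, f=200.0, g=200.0))
# one-sided flow (g = 0) reduces to an ordinary CPMM swap of 200 X in
print(integrate(1000.0, k, f=200.0, g=0.0))
```

The first case reproduces the order-book-like behaviour discussed later (the reserves never move), and the second collapses to a plain swap, as expected.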

When applied to CPMM, the equation can be integrated, if f(t)/g(t) is a constant, meaning the selling strategies of token X and Y are the same. We can further simplify the expression:


There is an analytical expression of the integral. Using properties of the hyperbolic functions, we can get a nice looking final solution (the token X number in the pool after trading), which only depends on the original position of the pool (x0, y0) and xin, yin. The final token Y number can be expressed as well, by switching the position of xin and yin, x0 and y0 in the final expression of xend, since the market maker function of CPMM is totally symmetric with respect to x and y. The product of the token X number and token Y number is equal to k as expected.

This form of differential equation derived from CPMM actually has a technical name: the Riccati equation. Its general form looks like:


There is no general closed-form solution to the Riccati equation. However, there are special cases where the Riccati equation can be solved. There is a paper¹⁵ discussing those cases. If the coefficients of the Riccati equation satisfy this condition:


Then the Riccati equation can be transformed into a Bernoulli-type equation, which can be solved quite easily and gives the same result as before. As we can see, satisfying this condition is the same as keeping f(t)/g(t) constant, which we assumed in the first method of solving the differential equation.

When f(t)/g(t) is not a constant, what forms of f(t) and g(t) we can choose to make the differential equation have closed-form solutions is still an open question. Finding such solutions will give us more options (the selling strategies of token X and Y do not have to be the same).

Now let us apply TWAMM to LMSR:


Again, we assume the selling strategies are the same. Then the differential equation can be integrated. We can further simplify the final token X and Y number expression as:


Similarly, the differential equation is not guaranteed to have closed-form solutions when the selling strategies are different.

Once we obtain xend and yend, we can calculate how much of the token X and token Y each side will receive:


During this period, all the orders in the same trading direction are pooled together, so each individual trader gets their fair share of the tokens based on the percentage they contributed to xin and yin.

Pros: TWAMM reduces the price slippage for large orders by allowing counter-parties to trade against them simultaneously. In the most ideal case (xin/yin = x0/y0), zero-slippage trading can be achieved: xend = x0 and yend = y0, and the TWAMM essentially serves as an order book, exchanging tokens between the two sides without providing liquidity. Long-term orders are broken into infinitely small orders which are executed virtually between blocks. Because of this, TWAMM is less susceptible to sandwich attacks, since the attacker has to place one order at the end of a block and another at the beginning of the following block.

Cons: The gas fee could be very high if we allow orders to expire at any time. This is because the integral results have to be calculated (the paper calls this "lazy evaluation") many times; in the worst case, for every single block. Therefore, in practice, we have to make orders expire at set blocks to simplify the calculation. Besides, the liquidity pools a TWAMM uses have to be different from existing liquidity pools, since those have no concept of virtual orders and lazy evaluation. Regular traders do not want to pay the extra gas fee incurred by lazy evaluations when they interact with a TWAMM (the pool is updated whenever someone interacts with it).

We can also apply TWAMM to time-dependent AMMs such as YieldSpace¹⁶:


There are two forms of the market making function, both of which lead to differential equations with no known closed-form solution. In the second case, the system can at least be reduced to a single differential equation.

Summary of AMM Innovations

The trend among new versions of existing AMMs, and among entirely new AMMs, is the protection of LPs and the improvement of the LP experience. There are various ways of doing this, whether by upgrading the platform to attract more users (and with them more fees) or by redesigning the AMM to avoid IL and other LP risks. There is also a focus on improving the trading experience, to make DEXs more competitive with centralized exchanges. This includes better liquidity, brought about by effective concentrated liquidity regimes, and better functionality such as HFT. Furthermore, the issues of JIT liquidity and frontrunning remain at the forefront of developers' minds, and may inhibit wider adoption of AMMs and DEXs.






  5. Bancor Protocol: Continuous Liquidity for Cryptographic Tokens through their Smart Contracts

  6. Formulas for Bancor system

  7. Uniswap V2 Core

  8. Uniswap’s Financial Alchemy

  9. SoK: Decentralized Exchanges (DEX) with Automated Market Maker (AMM) protocols

  10. Uniswap V3 Core

  11. Uniswap V3: The Universal AMM

  12. A non-custodial portfolio manager, liquidity provider, and price sensor

  13. Smart Order Router V2

  14. StableSwap — efficient mechanism for Stablecoin liquidity

  15. Automatic market-making with dynamic peg

  16. New Invariants for Automated Market Making

  17. TWAMM

  18. Analytical solutions of the Riccati equation with coefficients satisfying integral or differential conditions with arbitrary functions

  19. YieldSpace: An Automated Liquidity Provider for Fixed Yield Tokens

  20. Improved Price Oracles: Constant Function Market Makers

  21. A Mathematical View of Automated Market Maker (AMM) Algorithms and Its Future

  22. Impermanent Loss in Uniswap V3

  23. Calculating the Expected Value of the Impermanent Loss in Uniswap

  24. Uniswap’s Financial Alchemy

  25. A Guide for Choosing Optimal Uniswap V3 LP Positions, Part 1

  26. A Guide for Choosing Optimal Uniswap V3 LP Positions, Part 2

  27. Uniswap V3: Liquidity Providing 101





