Synchronous composability allows assets to be transferred seamlessly between chains by ensuring instant cross-chain transaction finality. This enables shared liquidity pools.
Zk-rollups support synchronous composability via instant validity proofs that prove correctness of state transitions. This allows shared liquidity between zk-rollups.
Optimistic rollups lack synchronous composability due to delayed fraud proof settlement. They cannot securely enable shared liquidity pools.
Cross-rollup transactions require simultaneous atomically settled execution across multiple rollups, introducing risks and complexity compared to single-rollup transactions.
Innovations like recursive SNARKs could help address cross-rollup costs and inefficiencies to make large transactions more viable.
Shared sequencing models that blend decentralization with temporary outsourcing to randomized sequencers show promise for cross-rollup composability.
Furthermore, I experimented with a liquidity provider utility maximization model based on allocating tokens between isolated pools and zk-rollup shared pools.
Providers face tradeoffs between returns and liquidation risks. Two types were modeled - risk-averse and return-driven.
Shared pools offer both composability return bonuses and reduced liquidation risks.
Moral hazard constraints were incorporated to capture the potential for diversion.
Comparative statics showed how parameters like composability benefits and liquidity risks impact optimal allocations.
The analysis also yields insights into the conditions that incentivize shared vs. isolated pools, based on heterogeneity in risk preferences and market conditions.
Introduction
Shared Liquidity and Synchronous Composability
What is Shared Liquidity?
Why Synchronous Composability Matters
Rollup Architectures
Zk-Rollups Enable Shared Liquidity
Limitations of Optimistic Rollups
Challenges of Cross-Rollup Transactions
Risks of Sequential Execution
Cost Considerations
Recursive SNARKs to Optimize Costs
Sequencing Models
Exploring Liquidity Provider Utility Maximization Model
Formulating a Shared Liquidity Model
Walkthrough of Model Components
Thoughts
Appendix - Formulas and Examples
Shared liquidity is the ability to transfer assets and value seamlessly between different blockchains without the need for intermediaries or centralized exchanges.
To achieve shared liquidity, crosschain protocols must be able to support atomic swaps and other types of crosschain transactions that allow assets to be transferred between blockchains in a trustless and efficient manner.
Synchronous composability is a key requirement for these types of transactions, as it ensures that state changes on one blockchain are immediately reflected on the other blockchain, allowing assets to be transferred seamlessly and without delay.
Without synchronous composability, crosschain transactions may suffer from long wait times and increased transaction costs, which can limit the types of assets that can be transferred and the efficiency of the overall system. This can, in turn, limit the potential for shared liquidity across multiple blockchains.
For a shared liquidity pool to function efficiently across multiple rollup networks, it is critical that the different networks can "talk" to each other seamlessly. Specifically, when a swap or deposit/withdrawal transaction occurs on one network, the liquidity pool contract on every other connected network must be able to immediately recognize that transaction and update its local view of the pool's balances accordingly.
This property where transactions on one rollup or network can instantly and atomically update state on other networks is called synchronous composability. It enables different liquidity pool contracts deployed across networks to always have a singular up-to-date view of the pool's liquidity.
Without synchronous composability, if a swap occurred on Rollup A, the contract on Rollup B would have no way of knowing that happened until much later. This lag in information sharing leads to potential arbitrage, inaccuracies in exchange rates, and failed transactions when later queries to Rollup B reflect outdated state.
By enabling all connected networks to react instantly to each other's transactions, synchronous composability powers seamless shared liquidity pools. This avoids delays, inconsistencies, and complex reconciliation logic. All connected contracts stay in perfect sync without centralized overseers.
In summary, synchronous composability is the crucial technical property that makes shared liquidity pools efficient and trustless across decentralized layer 2 networks. By facilitating instant inter-network communication, it lets pool contracts maintain a singular canonical state across all networks.
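To make the stale-state problem concrete, here is a toy Python sketch (all classes, numbers, and the constant-product pricing are illustrative assumptions, not any real rollup or AMM API) comparing a synced view of a shared pool with a stale one:

```python
# Toy illustration of why stale cross-rollup state is a problem.
# All classes and numbers are hypothetical; no real rollup or AMM API is used.

class PoolView:
    """A liquidity pool contract's local view of reserves on one rollup."""
    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x = reserve_x
        self.reserve_y = reserve_y

    def price(self) -> float:
        # Simple constant-product style spot price of X in terms of Y.
        return self.reserve_y / self.reserve_x

    def apply_swap(self, dx: float) -> float:
        """Swap dx of X into the pool, return Y out (constant product)."""
        dy = self.reserve_y - (self.reserve_x * self.reserve_y) / (self.reserve_x + dx)
        self.reserve_x += dx
        self.reserve_y -= dy
        return dy

# One shared pool, mirrored on Rollup A and Rollup B.
view_a = PoolView(1000.0, 1000.0)
view_b = PoolView(1000.0, 1000.0)

# A large swap happens on Rollup A.
view_a.apply_swap(100.0)

# With synchronous composability, Rollup B's view is updated atomically:
view_b_synced = PoolView(view_a.reserve_x, view_a.reserve_y)

# Without it, Rollup B keeps quoting the stale price until settlement,
# which is exactly the arbitrage / failed-transaction window described above.
print(f"price on A after swap:    {view_a.price():.4f}")
print(f"price on B (synced view): {view_b_synced.price():.4f}")
print(f"price on B (stale view):  {view_b.price():.4f}")
```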
The key difference between zk-rollups and optimistic rollups is how they ensure the validity of transactions.
Zk-rollups use zero-knowledge proofs to cryptographically prove the validity of transactions off-chain. This allows other smart contracts to immediately accept the results of computations in a zk-rollup without having to actually observe the computation directly. This property is called synchronous composability.
Because of synchronous composability, if a liquidity pool is deployed inside a zk-rollup, any other contract on any zk-rollup or even mainnet can immediately determine the updated state of that pool after each transaction. This allows liquidity to be shared across rollups and mainnet seamlessly.
Unfortunately, optimistic rollups lack synchronous composability due to their delayed transaction settlement via fraud proofs.
Optimistic rollups rely on a delayed fraud proof system to validate transactions. When a state transition occurs on an optimistic rollup, other contracts have no way to immediately determine if that transition was valid.
Instead, there is a withdrawal delay period (typically 1-2 weeks) where the new state can be disputed via a fraud proof. If no fraud is detected after the withdrawal delay, the transition is finalized.
This means changes in an optimistic rollup's state are asynchronous - other networks only know the change is likely valid, but cannot act on it with certainty until the fraud proof window has expired.
For example, if a swap occurs in a liquidity pool on optimistic Rollup A, there is no way for a contract on Rollup B to immediately update based on that swap. It has to wait for Rollup A's fraud proof window to expire before it can trust the new state.
This asynchronous settlement precludes synchronous composability. Since transactions cannot be instantly and trustlessly committed across networks, shared liquidity pools cannot function efficiently using optimistic rollups.
So while in theory liquidity could be shared between optimistic rollups, there would need to be additional mechanisms to prevent fraud across rollups during the fraud proof window. This makes shared liquidity pools much simpler and more efficient with zk-rollups.
The zero-knowledge proofs in zk-rollups allow trustless, synchronous composability between rollups, enabling seamless shared liquidity and interoperability between zk-rollup networks.
--
An interesting research paper I read, titled "General Purpose Atomic Crosschain Transactions," discusses synchronous composability in the context of enabling atomic crosschain transactions between multiple blockchains.
Here's a summary of the main concepts it discusses:
For crosschain function calls, a key requirement is synchronous composability - the ability for a transaction on one blockchain to immediately react to state changes on another blockchain. This allows contracts across chains to remain in sync.
It notes optimistic rollups lack synchronous composability due to their delayed fraud proof settlement system. Transactions cannot be immediately committed across networks.
In contrast, zk-rollups provide synchronous composability through instant validity proofs. State transitions can be immediately proven valid for other contracts without delay.
This composability makes sharing liquidity pools and other state across zk-rollups simple and efficient. Transactions occur atomically across networks.
The core challenge is enabling true synchronous composability across multiple zk-rollups. This requires transactions that atomically read and write state on multiple rollups simultaneously.
Zk-rollups alone don't automatically provide cross-rollup composability. But their instant proofs of validity provide a foundation for building cross-rollup protocols.
Another challenge with cross-rollup composability is sequencing transactions atomically across multiple rollup networks. While a fully centralized sequencer could coordinate this, it would compromise decentralization.
However, what if we maintain a decentralized sequencer for each rollup while occasionally outsourcing a small recent portion of sequencing work to a randomly selected shared sequencer?
This approach could work as follows:
Each zk-rollup network operates its own independent sequencer node to handle regular intra-rollup sequencing and block production. This preserves decentralized sequencing for the majority of transactions, optimizing latency and throughput.
Periodically, for example every 10-20 seconds, the sequencers would finalize a block prefix containing the most recent 10 seconds' worth of transactions on their rollup.
These block prefixes from each participating rollup would get passed to the shared random sequencer.
This temporary outsourcing hands over a small slice of each rollup's sequencing work to the shared component.
The shared sequencer is then responsible for sequencing any cross-rollup transactions that need to execute atomically across networks and settling them into a unified block suffix.
It stitches together this suffix containing cross-rollup txs with the independently produced block prefixes from each rollup.
By combining decentralized per-rollup prefixes with shared suffixes, the sequencing load is distributed optimally between decentralization for regular transactions and coordination for cross-rollup composable transactions. The shared component is minimized to just occasional suffixes rather than entire blocks.
This creative blend could solve the core technical challenge of synchronous cross-rollup sequencing while preserving the decentralization and custom optimization of independent rollup networks. The hybrid model balances the needs for coordination and decentralization through clever segmentation of sequencing duties.
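As a rough sketch of that data flow (the block and transaction structures, the random election, and the suffix ordering rule are all assumptions made for illustration, not an existing sequencer implementation):

```python
# Hypothetical sketch of the hybrid sequencing flow described above.
# Block/transaction structures are invented for illustration only.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Tx:
    tx_id: str
    rollups: List[str]          # rollups this tx touches (one, or several if cross-rollup)

@dataclass
class BlockPrefix:
    rollup: str
    txs: List[Tx]               # most recent ~10s of intra-rollup txs, sequenced locally

@dataclass
class Block:
    rollup: str
    prefix: List[Tx]            # produced by the rollup's own decentralized sequencer
    suffix: List[Tx]            # cross-rollup txs ordered by the shared sequencer

def shared_sequencer_round(prefixes: List[BlockPrefix],
                           cross_rollup_txs: List[Tx],
                           candidates: List[str]) -> List[Block]:
    """One round: pick a random shared sequencer, order the cross-rollup txs,
    and stitch the same suffix onto every participating rollup's prefix."""
    elected = random.choice(candidates)                       # randomized shared sequencer
    suffix = sorted(cross_rollup_txs, key=lambda t: t.tx_id)  # its chosen ordering
    print(f"shared sequencer for this round: {elected}")
    return [Block(rollup=p.rollup, prefix=p.txs, suffix=suffix) for p in prefixes]

# Example round: two rollups hand over their latest prefixes plus two cross-rollup txs.
prefixes = [
    BlockPrefix("rollup_a", [Tx("a1", ["rollup_a"]), Tx("a2", ["rollup_a"])]),
    BlockPrefix("rollup_b", [Tx("b1", ["rollup_b"])]),
]
cross = [Tx("x1", ["rollup_a", "rollup_b"]), Tx("x2", ["rollup_a", "rollup_b"])]
for block in shared_sequencer_round(prefixes, cross, ["seq-1", "seq-2", "seq-3"]):
    print(block.rollup, [t.tx_id for t in block.prefix], "+", [t.tx_id for t in block.suffix])
```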
Because cross-rollup transactions involve the atomic settlement of state across multiple rollups, they inherently consume sequencer resources and gas on all participating chains during execution. This means the aggregate costs across rollups will accumulate.
For example, if Rollup A charges 10 gwei per gas unit and Rollup B charges 20 gwei per gas unit, a cross-rollup transaction would need to pay ~30 gwei per unit to cover costs on both chains.
By locking multiple rollups into ordered execution, the aggregate gas cost across chains will logically be higher. This aggregated gas pricing model is sensible for aligning incentives - all rollups involved share the cost burden. However, the resulting high prices could limit cross-rollup composability to high-value transactions that justify the cost.
Potential ways to mitigate this issue:
Batching cross-rollup transactions to amortize fixed sequencing overhead costs.
Adding gas subsidy incentives for early cross-rollup transactions to jumpstart liquidity.
Designing reputation systems to minimize sequencer fees over time.
Exploring cryptographic innovations like recursive SNARKs to lower proof costs.
The added expenses of cross-rollup transactions are reasonable for aligning incentives across chains, but optimizing these costs will be crucial to make cross-rollup composability broadly accessible. Finding the right balance remains an open research challenge.
But cross-rollup synchronous transactions are inherently more complex and risky than transactions within a single rollup.
The key source of risk is the sequential execution required across multiple rollups. The transaction must be executed and committed chain-by-chain in a specific predefined order. This sequential process opens up the possibility of failure at each step before the transaction is fully finalized.
For instance, the transaction could fail on Rollup B after already settling on Rollup A. Now a coordinated rollback is required to undo the state change on Rollup A. Performing rollbacks across chains is complex, especially if different rollup types are involved.
Zk-rollups can leverage validity proofs to quickly reverse invalid state transitions. However, optimistic rollups rely on delayed fraud proofs for settlement finality, making rollbacks more difficult. Partial reversals may also be necessary if failure occurs midway through the sequence.
Even with zk-rollups, the added risks of cross-rollup transactions will likely result in higher fees for users. The fees would need to account for factors like:
Per-rollup transaction fees to submit and verify the transaction.
Sequencer fees which may vary based on reputation and reliability.
Batch sizes due to benefits from aggregating transaction overhead costs.
Retries and complex rollback procedures in case of failures.
The end-user experience would need to abstract away this added complexity. However, the inherent risks of cross-rollup sequential settlement do necessitate additional fees.
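Putting those fee components together, a back-of-the-envelope estimate might look like the sketch below; every number and the simple additive breakdown are assumptions for illustration, not a real fee schedule:

```python
# Rough cross-rollup fee estimate combining the components listed above.
# All inputs are hypothetical; this is not a real fee schedule.

def estimate_cross_rollup_fee(per_rollup_fees,        # execution + verification fee on each rollup
                              sequencer_fee,          # fee charged by the (reputation-weighted) sequencer
                              fixed_batch_overhead,   # per-batch coordination cost
                              batch_size,             # txs sharing that overhead
                              failure_prob,           # chance the sequence fails mid-way
                              rollback_cost):         # expected cost of a coordinated rollback
    base = sum(per_rollup_fees) + sequencer_fee
    amortized_overhead = fixed_batch_overhead / batch_size
    expected_failure_cost = failure_prob * rollback_cost
    return base + amortized_overhead + expected_failure_cost

fee = estimate_cross_rollup_fee(
    per_rollup_fees=[0.0005, 0.0007],  # ETH on Rollup A and Rollup B
    sequencer_fee=0.0002,
    fixed_batch_overhead=0.01,
    batch_size=10,
    failure_prob=0.02,
    rollback_cost=0.005,
)
print(f"expected cross-rollup fee: {fee:.6f} ETH")  # 0.0005+0.0007+0.0002+0.001+0.0001 = 0.0025
```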
Hence, for the rest of the research, I’m only going to focus on zk-rollups.
A reputation-based model for sequencer fees could provide incentives to maximize uptime and minimize failed executions.
In cross-rollup protocols that rely on shared sequencers, the reputation of the sequencer node is important.
Reputation could be measured based on metrics like uptime, average latency, and rate of failed transactions. This reputation score can be quantified and tracked on-chain for each registered sequencer.
When submitting a cross-rollup transaction, the user could select amongst available sequencers and pay a fee to the sequencer.
Sequencers with higher reputation scores could charge higher fees due to a greater likelihood of reliable execution.
Conversely, sequencers with poor reputation scores would only be able to charge lower fees until they improved their metrics.
This creates direct financial incentives for sequencers to maximize their uptime, minimize latency, and avoid failed executions. The reputation-weighted fees compensate sequencers proportional to their quality of service.
Periodic decentralized audits of sequencer performance could ensure accurate reputation tracking.
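One way this could be wired up is sketched below; the scoring formula, metric weights, and fee bounds are entirely my own assumptions, just to make the incentive concrete:

```python
# Hypothetical reputation score and fee multiplier for a shared sequencer.
# Metrics, weights, and fee bounds are invented for illustration.

def reputation_score(uptime: float, avg_latency_ms: float, failure_rate: float) -> float:
    """Score in [0, 1]: rewards uptime, penalizes latency and failed executions."""
    latency_penalty = min(avg_latency_ms / 1000.0, 1.0)   # saturate at 1 second
    score = 0.5 * uptime + 0.3 * (1.0 - latency_penalty) + 0.2 * (1.0 - failure_rate)
    return max(0.0, min(1.0, score))

def sequencer_fee(base_fee: float, score: float) -> float:
    """Higher-reputation sequencers can charge up to 2x the base fee;
    low-reputation ones are pushed toward a discount."""
    return base_fee * (0.5 + 1.5 * score)

for name, metrics in {
    "reliable-seq": (0.999, 150.0, 0.001),
    "flaky-seq":    (0.90,  600.0, 0.05),
}.items():
    s = reputation_score(*metrics)
    print(f"{name}: score={s:.3f}, fee={sequencer_fee(0.001, s):.6f} ETH")
```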
Another way to reduce costs is batching.
Batching cross-rollup transactions helps amortize the fixed sequencing costs across more transactions; larger batches reduce per-transaction overhead.
For example, if the fixed cost to sequence a single tx is 0.01 ETH, sequencing 10 txs batched together drops the per-tx overhead to 0.001 ETH.
Larger batch sizes further improve the amortization, although very large batches could increase failure rates and sequencing times.
An optimal batch size needs to be determined to balance overhead savings vs. other drawbacks.
But overall, batching is an effective strategy to reduce the impact of unavoidable fixed costs and improve cross-rollup transaction efficiency.
Certain sequencing overhead costs are fixed per batch, regardless of the number of transactions - for example, the gas fees to submit the batch coordination transactions, or the fixed fee charged by the sequencer node per batch. By combining multiple transactions into a batch, these fixed costs are spread, or amortized, across all the transactions in the batch, reducing the per-transaction cost.
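A quick sketch of that amortization math, reusing the illustrative 0.01 ETH fixed cost from above (the per-transaction variable cost is an added assumption so the curve flattens realistically):

```python
# Per-transaction cost as batch size grows, using the illustrative 0.01 ETH fixed cost.
# The variable (per-tx) cost component is an assumed value added for realism.

FIXED_COST_PER_BATCH = 0.01   # ETH, e.g. batch coordination tx + sequencer's per-batch fee
VARIABLE_COST_PER_TX = 0.0004 # ETH, assumed marginal cost of each extra tx in the batch

def per_tx_cost(batch_size: int) -> float:
    return FIXED_COST_PER_BATCH / batch_size + VARIABLE_COST_PER_TX

for n in (1, 10, 50, 200):
    print(f"batch of {n:>3}: {per_tx_cost(n):.5f} ETH per tx")
# batch of   1: 0.01040 ETH per tx
# batch of  10: 0.00140 ETH per tx  (fixed overhead alone drops to 0.001, as in the example)
# batch of  50: 0.00060 ETH per tx
# batch of 200: 0.00045 ETH per tx
```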
But there's a much better way.
Recursive SNARKs are an advanced technique that enables generating succinct zero-knowledge proofs in a recursive manner.
Here is how they could help optimize rollup costs:
In a standard SNARK proof system, the expensive prover computation has to be repeated each time a new proof is generated. With recursive SNARKs, the prover can reuse prior proofs to generate new proofs efficiently in a chain.
For cross-rollup transactions, this means the validity proof from Rollup A can be recursively composed into the validity proof for Rollup B. Instead of doing redundant proof generation work, prior proofs are leveraged.
This proof reuse amortizes the expensive prover computation across transactions. The marginal cost of each additional validity proof declines.
As more rollups are involved in a cross-rollup transaction, recursive SNARKs would prevent an exponential explosion in aggregate proof generation costs. The cost growth can be made sub-linear.
Recursive proof systems are an active area of cryptography research. If production-ready systems are developed, they could significantly reduce the Proof of Validity costs that dominate rollup transaction fees today.
Consider a cross-rollup transaction between two zk-rollups.
zkSync estimates its Proof of Validity costs at around $0.34 per transfer at current ETH prices.
Using standard SNARKs:
zkSync proof cost: $0.34
Scroll proof cost: $0.44
Total cost: $0.78
With recursive SNARKs and assuming a 90% proof reuse efficiency:
zkSync proof cost: $0.34
Scroll proof cost: $0.044 (90% cheaper by reusing zkSync's proof)
Total cost: $0.384
This represents roughly 50% cost savings - from $0.78 down to $0.384.
As Ethereum L1 fees rise, these proportional savings become even more significant. Recursive proofs minimize the impact of aggregating validity proof costs across multiple rollups.
The more rollups involved, the greater the savings compared to using standard SNARK proofs for each rollup independently. This optimization unlocks large cross-rollup transactions.
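The same arithmetic, generalized to N rollups under the assumed 90% reuse efficiency; the zkSync and Scroll figures are the ones used above, while the extra rollups' proof costs are made-up placeholders:

```python
# Aggregate proof cost for a cross-rollup transaction, with and without recursive proof reuse.
# Assumes (as above) that every proof after the first can reuse ~90% of the prior work.

def standard_cost(proof_costs):
    return sum(proof_costs)

def recursive_cost(proof_costs, reuse_efficiency=0.9):
    first, *rest = proof_costs
    return first + sum(c * (1.0 - reuse_efficiency) for c in rest)

two_rollups = [0.34, 0.44]                 # zkSync, Scroll (figures used above)
four_rollups = [0.34, 0.44, 0.40, 0.50]    # two more hypothetical rollups

for costs in (two_rollups, four_rollups):
    std, rec = standard_cost(costs), recursive_cost(costs)
    print(f"{len(costs)} rollups: standard=${std:.3f}, recursive=${rec:.3f}, "
          f"savings={100 * (1 - rec / std):.0f}%")
# 2 rollups: standard=$0.780, recursive=$0.384, savings=51%
# 4 rollups: standard=$1.680, recursive=$0.474, savings=72%
```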
Currently, most rollups use bridges to transfer data or tokens between each other, but these bridges are often slow, expensive, or limited in functionality. Asynchronous composability, where interactions complete within an unknown and unbounded amount of time, is possible with expressive bridges that enable arbitrary data passing, but it does not offer the same level of seamlessness that synchronous composability does.
Some possible solutions to enable synchronous composability and shared liquidity across rollups are:
Uniting rollups with shared sequencers: For example, if a sequencer operates on two ZK-rollups, it can provide synchronous composability when it produces a block on both chains at the same time. This also removes the burden for each rollup to bootstrap its own decentralized set of sequencers, which can be costly and complex.
Using cross-chain liquidity protocols: For example, Connext enables fast and cheap transfers between Ethereum and any EVM-compatible chain or layer. Another example is Hop Protocol, which enables users to swap tokens between Ethereum, Polygon, xDai, Optimism, and Arbitrum using a network of AMMs and bridges. These protocols can provide shared liquidity for applications that operate on different rollups.
Developing cross-rollup standards and frameworks: For example, emerging standards for Ethereum rollups aim to provide a common interface and functionality for developers and users. zkPorter is an L2 scaling technique that combines zkRollup and sharding into a highly scalable yet atomically composable network. These standards and techniques can facilitate cross-rollup development and innovation, but they are still in progress and have not been widely adopted or tested yet.
NOTE: I will dive deep into the valuation and modeling around the TAM in detail later on, as it is difficult to navigate the flow of money from one rollup to another. I will update the research as and when it happens.
Some of the market size assumptions if you want to explore further:
The current zk-rollup ecosystem is approximated by the Total Value Secured (TVS) metric provided by L2Beat. This assumes TVS correlates to overall platform activity and market size.
The market is segmented into major categories like DEXs, Lending, and Derivatives based on the types of protocols currently operating on zk-rollups. This assumes these segments represent a material share of current ecosystem activity.
The top protocols by TVS within each segment can be used to estimate segment sizes. This assumes the leading protocols represent a majority of segment activity.
Growth Rate Assumptions
A 3x growth rate over the next 12 months is assumed, based on the high historical growth rates observed in both the broader DeFi market and zk-rollup adoption over the last 1-2 years.
This assumes growth remains strong but tempers down slightly from historical levels as the market matures.
Adoption Rate Assumptions
The addressable market can be estimated at 10-50% of the TVS from liquidity-driven segments like DEXs and Lending.
This range is based on observed adoption S-curves for prior blockchain innovations like DeFi and NFTs, which often see early adoption in the 10-50% range.
It assumes protocols and developers will need time to integrate shared liquidity, limiting full instant adoption.
Value Proposition Assumptions
The value of shared liquidity can be assumed to come from factors like gas cost savings, capital efficiency, larger liquidity pools, and new composable dApps.
These benefits are hypothesized based on observations from prior interoperability enhancements like bridges and layer 2s.
The dollar value of these benefits can be estimated as a percentage of activity in liquidity-driven segments that can gain efficiencies from shared liquidity.
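Chaining those assumptions together, a rough sizing calculation could look like the sketch below; the starting TVS figure and the value-capture share are placeholders I have not verified:

```python
# Back-of-the-envelope TAM sizing from the assumptions above.
# The starting TVS and value-capture share are placeholder numbers, not verified data.

current_tvs_usd     = 500e6        # hypothetical current zk-rollup TVS in liquidity-driven segments
growth_multiple     = 3.0          # assumed 3x growth over the next 12 months
adoption_range      = (0.10, 0.50) # assumed 10-50% of TVS becomes addressable
value_capture_share = 0.05         # assumed 5% of addressable activity captured as value

for adoption in adoption_range:
    addressable = current_tvs_usd * growth_multiple * adoption
    value = addressable * value_capture_share
    print(f"adoption {adoption:.0%}: addressable ~${addressable/1e6:.0f}M, "
          f"value from shared liquidity ~${value/1e6:.1f}M")
# adoption 10%: addressable ~$150M, value from shared liquidity ~$7.5M
# adoption 50%: addressable ~$750M, value from shared liquidity ~$37.5M
```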
I will dive deep into the valuation and modeling around the TAM in detail later on and update the research as and when it happens.
The model is based on allocating tokens between isolated pools and zk-rollup shared pools, with the following objective:
Maximize the utility of liquidity providers (LPs) based on their allocations to isolated pools and shared pools on zk-rollups. The utility is derived from returns on their allocations minus associated costs.
NOTE: I haven’t used this formula with actual tokens; I’ve only used it in a simulation. It might not be accurate, but the main goal is to paint a picture of what activity under shared pools with shared liquidity might look like.
Objective Functions:
For a liquidity provider of type q (where q can be either risk-averse Type 1 or return-driven Type 2):
U_q = w_q * Returns - (1 - w_q) * Costs
Where:
Returns are given by:
Returns = r * I_i + (r + c) * S_zk
Costs are:
Costs = λ * I_i + φ * λ * S_zk, plus the moral hazard penalty captured by θ (see below).
Variables:
I_i: Amount provider i allocates to isolated pools (decision variable).
S_zk: Amount provider i allocates to the shared pool on zk-rollups (decision variable).
E: Total tokens that each liquidity provider provides for allocation.
Parameters:
r : Base return for providing liquidity. It represents the basic expected return for supplying assets.
c : Additional return bonus specifically for zk-rollup shared pool due to composability benefits.
λ : Base liquidation cost parameter. It signifies the potential risk/cost of having assets in a liquidity pool that could be liquidated.
φ : Reduction factor for liquidation costs in shared pools on zk-rollups. It signifies that shared pools on zk-rollups have reduced liquidation risks compared to isolated pools.
w_q : Weight that provider type q places on returns vs. costs.
For Type 1 (risk-averse), this weight will be lower, indicating a higher concern for costs.
For Type 2 (return-driven), this weight will be higher, pointing to a preference for higher returns.
θ : A cost parameter that captures the cost/penalty associated with moral hazard.
γ : Maximum fraction of returns that can be diverted by the provider as part of the moral hazard.
--
Here is a simpler explanation of the key variables and parameters:
Variables:
I_i - The amount of tokens provider i allocates to isolated liquidity pools. This is a decision variable they optimize.
S_zk - The amount provider i allocates to the shared liquidity pool on zk-rollups. This is another decision variable.
E - The total token endowment each provider has available to allocate.
Parameters:
r - The baseline expected return rate for supplying liquidity.
c - The additional return bonus only for the zk-rollup shared pool due to composability.
λ - The baseline liquidation risk percentage if assets are pulled from a liquidity pool.
φ - The reduction factor for liquidation risks in the zk-rollup shared pool vs isolated pools.
w_q - The weight provider type q places on maximizing returns vs minimizing costs based on their risk preference.
θ - The parameter that controls the penalty costs associated with moral hazard/funds diversion.
γ - The maximum percentage of returns that can be diverted by a provider.
The variables represent the allocation decisions.
The parameters control the tradeoffs between returns and risks. Together, they allow modeling liquidity provider behavior based on different preferences and market conditions.
--
Allocation Constraint: The total allocation to isolated and shared pools should not exceed the provider's endowment.
Moral Hazard Constraint: The diversion due to moral hazard is limited to a fraction γ of the returns from each pool.
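Written out in the same plain notation, the constraints are roughly as follows (the diversion amounts D_iso and D_zk are labels I introduce here for illustration; the original formulation does not name them):
I_i + S_zk ≤ E
D_iso ≤ γ * r * I_i
D_zk ≤ γ * (r + c) * S_zk
with the moral hazard penalty entering the provider's costs as an additional term of the form θ * (D_iso + D_zk) (my simplification of how θ is applied).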
In essence, this model captures the decision-making process of liquidity providers as they allocate their endowment between isolated pools and shared pools on zk-rollups.
The utility functions encapsulate their expected returns, potential costs, and the implications of moral hazard, given their individual risk preferences.
1. Initial Concept:
Model Objective: I started with a basic premise: Understand the liquidity provision behavior in rollups, specifically zk-rollups and optimistic rollups.
Initial Framework:
Liquidity providers can supply assets to isolated pools or shared pools on these rollups.
Every liquidity provision has associated returns and costs.
Rationale: The overarching idea was to capture the essence of DeFi ecosystems in the overcrowded rollup space. The primary trade-offs revolved around the decision of where to allocate liquidity for maximum utility.
2. Introducing Liquidity Provider Types:
Then, I explored the concept of different types of liquidity providers:
Type 1: Risk-Averse
Type 2: Return-Driven
Rationale: A homogenous pool of liquidity providers would oversimplify the real-world DeFi environment. Recognizing that liquidity providers have different risk and return preferences added complexity and realism.
3. Building the Basic Model:
Objective Function:
Variables:
I_i: Allocation to isolated pools.
S_zk: Allocation to the shared pool on zk-rollups.
Parameters:
r : Base return for providing liquidity.
c : Bonus return for composability in shared pools.
λ : Liquidation cost.
φ : Reduction factor for liquidation costs in shared pools.
Rationale: This structure captured the core trade-offs, but it was quite broad. The aim was to get a basic understanding before diving into nuances.
4. Incorporating Moral Hazard:
Enhancement: I integrated the concept of moral hazard, representing the potential diversion of returns by providers.
Rationale: Moral hazard is a real concern in DeFi. By incorporating this, we ensured the model reflected potential malicious or opportunistic behavior, adding depth.
5. Realizing Technical Limitations:
Refinement: As I have mentioned earlier, optimistic rollups cannot technically have shared liquidity pools due to composability constraints, so the model focuses only on zk-rollups for shared liquidity.
Rationale: Ensuring the model's technical accuracy is crucial. Misrepresenting or oversimplifying the real-world architecture would limit its applicability and relevance.
6. Introducing Detailed Returns and Costs:
Enhancement: Detailed breakdown of returns into base returns and bonuses for composability. Separated costs into liquidation costs and moral hazard costs, with associated parameters.
Rationale: Increased granularity provided a clearer understanding of each component's contribution, allowing for more nuanced analyses.
7. Final Model Formulation:
After incorporating all the refinements and feedback, I finalized the model with the objective functions for the two liquidity provider types given above and worked through in the appendix.
Rationale: The final model aims to provide a holistic, nuanced, and technically accurate representation of shared liquidity in zk-rollups.
Conclusion: Throughout my thought process, the model underwent multiple iterations, each refining it further based on real-world complexities, technical intricacies, and potential scenarios. The iterative process ensured that the model was comprehensive, capturing the essence of shared liquidity dynamics in the context of zk-rollups.
NOTE: I haven’t used this formula with actual tokens; I’ve only used it in a simulation. It might not be accurate, and the main goal is to paint a picture of what activity under shared pools might look like.
Imagine a DeFi environment that consists of:
A zk-rollup chain facilitating transactions for dApps.
Two types of liquidity providers: LP1 (Risk Averse) and LP2 (Return Driven).
For our example, let's use the following parameters:
r : Base return for providing liquidity = 0.1 (or 10%)
c : Additional return bonus exclusive for zk-rollup shared pool due to composability = 0.05 (or 5%)
λ : Base liquidation cost parameter = 0.02 (or 2% of the allocation)
φ : Reduction factor for liquidation costs in shared pools on zk-rollups = 0.5 (50% reduction in shared pools)
W1 : Weight that LP1 (Risk Averse) puts on returns vs costs = 0.3 (values safety more)
W2 : Weight that LP2 (Return Driven) puts on returns vs costs = 0.8 (values returns more)
E : Endowment or total amount of tokens that each liquidity provider has available for allocation = 100 tokens.
For LP1 (Risk Averse):
a. Returns Calculation: For every token allocated to the isolated pool, LP1 would expect a return of 10%. For every token allocated to the shared pool on zk-rollups, the return would be 15% (10% base + 5% bonus).
b. Costs Calculation: There's a liquidation risk of 2% on the isolated pool, but only a 1% risk (due to the 50% reduction) on the shared zk-rollup pool.
c. Utility Function for LP1:
Using the utility function (worked through below), and given their risk aversion, LP1 will tend to allocate more to isolated pools.
--
Liquidation Risk Calculations:
For the isolated pool, the base liquidation risk parameter λ is 2%.
So if LP1 allocates, say, 70 tokens to the isolated pool, the liquidation risk cost is 0.02 * 70 = 1.4 tokens
For the shared pool on zk-rollups, there is a 50% reduction in liquidation risk represented by φ.
So the liquidation risk is 0.5 * λ = 0.5 * 0.02 = 0.01 or 1%
If LP1 allocates 30 tokens to the shared pool, liquidation cost is 0.01 * 30 = 0.3 tokens
In summary:
Isolated pool liquidation risk = 2% of allocation
Shared pool liquidation risk = 1% of allocation (due to 50% reduction from φ)
LP1 Utility Function:
U_1 = 0.3 * (Returns) - 0.7 * (Costs)
0.3 is the weight LP1 places on Returns, based on their risk aversion
0.7 is the weight LP1 places on Costs
Returns = 0.1 * I_i + 0.15 * S_zk
0.1 is the 10% base return rate
0.15 is the 15% return rate for the shared pool (10% base + 5% bonus)
Costs = 0.02 * I_i + 0.01 * S_zk
0.02 is the 2% liquidation risk for isolated pool
0.01 is the 1% liquidation risk for shared pool
Given the higher weight on Costs, LP1 will tend to allocate more to isolated pools to minimize liquidation risks.
--
For LP2 (Return Driven):
a. Returns Calculation: Similar to LP1, for every token allocated to the isolated pool, LP2 would expect a return of 10%. For the shared pool on zk-rollups, it's 15%.
b. Costs Calculation: The liquidation risk remains the same: 2% on isolated and 1% on the shared pool.
c. Utility Function for LP2:
Given the preference for returns, LP2 will tend to allocate more to the shared zk-rollup pool for the composability bonus.
--
The utility function for LP2 is: U_2 = 0.8 Ă— (Returns) - 0.2 Ă— (Costs)
Where:
0.8 is the weight LP2 places on Returns, based on their return-seeking preference.
0.2 is the weight they place on Costs.
Returns are calculated as: 0.1 Ă— I_i + 0.15 Ă— S_zk
I_i is LP2's allocation to isolated pools
S_zk is their allocation to the zk-rollup shared pool
0.1 is the base return rate r
0.15 is the boosted return rate for the zk pool (r + c)
Costs are calculated as: 0.02 × I_i + 0.01 × S_zk
0.02 is the base liquidation risk (λ)
0.01 is the reduced liquidation risk for the zk pool (φ × λ)
By weighting Returns higher than Costs, LP2 focuses on maximizing yields.
The higher return rate of 0.15 for the zk-rollup shared pool versus 0.1 for isolated pools incentivizes LP2 to allocate more to the zk-pool to capitalize on the extra composability bonus.
--
By solving the optimization problems, we might find:
LP1 allocates, say, 70 tokens to isolated pools and 30 tokens to shared pools.
LP2, being return-driven, allocates 20 tokens to isolated pools and 80 tokens to shared pools.
Risk Aversion Impact: LP1's risk aversion leads them to prioritize isolated pools despite the composability bonus in shared pools.
Return-Driven Decisions: LP2 leans heavily into the shared zk-rollup pool, trying to capitalize on the composability bonus.
Shared Pool Attraction: Despite their risk-averse nature, even LP1 allocated a non-trivial amount to shared pools, showcasing the allure of the composability bonus.
Cost Implications: The reduced liquidation risk in shared pools plays a role in influencing allocations, especially for LP1.
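As a quick numerical check, here is a minimal Python sketch that plugs the illustrative allocations above into the two utility functions; the allocations themselves are assumed for the example, not derived by an optimizer:

```python
# Evaluate the example utility functions at the illustrative allocations above.
# Parameters are the ones from the worked example; allocations are the assumed 70/30 and 20/80 splits.

r, c = 0.10, 0.05          # base return and composability bonus
lam, phi = 0.02, 0.5       # base liquidation cost and shared-pool reduction factor
weights = {"LP1 (risk averse)": 0.3, "LP2 (return driven)": 0.8}
allocations = {"LP1 (risk averse)": (70, 30), "LP2 (return driven)": (20, 80)}  # (isolated, shared)

def utility(w: float, isolated: float, shared: float) -> float:
    returns = r * isolated + (r + c) * shared
    costs = lam * isolated + phi * lam * shared
    return w * returns - (1 - w) * costs

for lp, w in weights.items():
    i_alloc, s_alloc = allocations[lp]
    print(f"{lp}: U = {utility(w, i_alloc, s_alloc):.2f}")
# LP1 (risk averse): U = 2.26
# LP2 (return driven): U = 10.96
```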
My thoughts on this topic:
The Promise and Potential
Look, shared liquidity is a really exciting innovation that could seriously change the game in DeFi. By letting people easily move assets and liquidity between different blockchains and rollups, it makes the whole system way more efficient and usable.
Right now, everything's still pretty fragmented. It's a pain to have to constantly switch between protocols and pools on different chains. But shared liquidity with synchronous composability sews everything together into one seamless ecosystem.
And we're talking serious network effects here. The more protocols that plug into shared liquidity pools, the more valuable it becomes for new protocols to join too. It's like an economic flywheel - more connectivity and composability drives more adoption, which enhances connectivity, and on and on.
My rough models suggest that if adoption crosses an inflection point, the total addressable market for shared liquidity in DeFi could reach multiple trillions of dollars. Obviously there's a ton of uncertainty, but the potential is stunning.
The Path Forward
Now this isn't easy tech to build. Zk-rollups seem promising, but it's still early days. We need more research on how to scale cross-rollup transactions through innovations like recursive SNARKs. And coordinating sequencing in a decentralized way remains tricky.
But none of those are dealbreakers. As the tech improves and DeFi expands, we'll figure out how to make shared liquidity work efficiently at global scale. And developers are starting to recognize the huge value prop of designing interoperable protocols.
So overall, this is an incredibly exciting area. The vision of frictionless liquidity flow between integrated DeFi protocols could seriously change the game. Technical challenges remain, but there's enough potential value at stake to overcome them. Maybe not tomorrow, but in the near future, shared liquidity could become a common thread weaving together an open and composable world of DeFi.
The Vision
Looking ahead, a world of seamlessly interconnected DeFi protocols seems inevitable. Shared liquidity will be a critical pillar enabling this future.
While achieving ubiquitous, decentralized shared liquidity at a global scale remains difficult, the potential payoff makes it an urgent priority.
With continued innovation, one day shared liquidity and frictionless cross-chain interoperability will become commonplace. This will mark a profound shift for decentralized finance.
The formulas used in the detailed example for our shared liquidity model:
For each type of liquidity provider, we calculate the returns from their allocations to both the isolated and shared pools:
Returns = r * I_i + (r + c) * S_zk
Where:
r = Base return for providing liquidity.
c = Additional return bonus exclusive to the zk-rollup shared pool due to composability.
The costs are mainly the liquidation risks associated with the allocations:
Costs = λ * I_i + φ * λ * S_zk
Where:
λ = Base liquidation cost parameter.
φ = Reduction factor for liquidation costs in shared pools on zk-rollups.
The utility for each liquidity provider is a combination of the returns they get from their allocations minus the associated costs. The weight they put on returns versus costs is captured by the w parameter.
For LP1 (Risk Averse): U_1 = W1 * Returns - (1 - W1) * Costs
For LP2 (Return Driven): U_2 = W2 * Returns - (1 - W2) * Costs
Where W1 and W2 represent the weights or importance that LP1 and LP2, respectively, give to returns over costs.
These formulas capture the trade-offs the liquidity providers make when deciding where to allocate their assets, considering both the potential returns and the associated risks or costs.
Thank you for reading through, and subscribe below for regular post updates.
I’d also appreciate it if you shared this with friends who would enjoy reading it.
You can contact me here: Twitter and LinkedIn.
You can also buy my keys at friend.tech by searching for 0xArhat.
If you find this deep dive analysis useful, please consider donating to 0x1de17b6c736bcd00895655a177535c2a33c6feba (Arbitrum, Ethereum, Optimism) and/or by minting an NFT for this & other blog posts by me.