Limit the number of transactions and limit deployment priority in the mempool #2999
base: priority_fee
Conversation
@@ -316,32 +340,38 @@ impl<N: Network> Consensus<N> {
        priority_fee: PriorityFee<N>,
        id: N::TransactionID,
        transaction: Transaction<N>,
        size: TransactionSize,
Suggested change:
-        size: TransactionSize,
+        new_queue_size: TransactionSize,
either that, or https://github.com/AleoHQ/snarkOS/pull/2999/files#diff-8b822371aae41d2275688c666d4c5d5d304e67aa422628ee42c405b5e8d7f572R306 needs adjustment
My bad, I was passing the wrong value: 7cd6dce
We need to pass the individual tx size so we can order the transactions by it.
    ) -> Result<()> {
        // Get the current timestamp.
        let timestamp = now_nanos();
        // Add the Transaction to the transactions_in_queue.
        let inserted = self.transactions_in_queue.lock().insert(id);
        ensure!(inserted, "Transaction '{}' exists in the queue", fmt_id(id));
        // Add the size to the tracking transactions_queue_size.
        self.transactions_queue_size.fetch_update(Relaxed, Relaxed, |x| Some(x.saturating_add(size))).map_err(error)?;
Suggested change:
-        self.transactions_queue_size.fetch_update(Relaxed, Relaxed, |x| Some(x.saturating_add(size))).map_err(error)?;
+        self.transactions_queue_size.store(new_queue_size, Relaxed);
See comment above
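For context, the trade-off under discussion (an atomic read-modify-write via `fetch_update` versus a plain `store` of a precomputed total) can be sketched in isolation. `QueueSize` and its methods are hypothetical names for this sketch, not snarkOS code:

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

// Hypothetical stand-in for the mempool's queue-size counter.
struct QueueSize(AtomicU64);

impl QueueSize {
    // Atomically add a transaction's size, saturating at u64::MAX.
    // `fetch_update` retries the closure on contention; since the closure
    // always returns `Some`, the update cannot fail. The previous value
    // is returned on success.
    fn add(&self, size: u64) -> u64 {
        self.0
            .fetch_update(Relaxed, Relaxed, |x| Some(x.saturating_add(size)))
            .expect("closure always returns Some")
    }

    // Atomically subtract a transaction's size on removal, saturating at zero.
    fn sub(&self, size: u64) -> u64 {
        self.0
            .fetch_update(Relaxed, Relaxed, |x| Some(x.saturating_sub(size)))
            .expect("closure always returns Some")
    }
}

fn main() {
    let q = QueueSize(AtomicU64::new(0));
    assert_eq!(q.add(100), 0); // returns the previous value
    assert_eq!(q.add(u64::MAX), 100); // saturates instead of overflowing
    assert_eq!(q.0.load(Relaxed), u64::MAX);
    q.sub(50);
    assert_eq!(q.0.load(Relaxed), u64::MAX - 50);
}
```

The `fetch_update` variant keeps the counter consistent under concurrent inserts and removals; a plain `store(new_queue_size, Relaxed)` is only safe if the new total is computed under the same lock that guards the queue itself.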
node/consensus/src/lib.rs
Outdated
-        // Remove the transactions from the queue index and mark them again freshly as seen.
-        for (_, tx) in transactions.iter() {
+        // Remove the transactions from the queue index, size tracker and mark them again freshly as seen.
+        for ((_, _, negative_size, _), tx) in transactions.iter() {
why use negative size here instead of subtracting?
Because we're storing a negative size: I want to prioritize smaller transactions as extra DoS protection. It is very ugly, but it makes sense, and it's the same ugliness we already have from storing negative timestamps... Ideas are welcome.
If we just want to reverse the sorting order we can use `cmp::Reverse` instead; it conveys the rationale better IMO.
TIL, brilliant fee6b46
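The `cmp::Reverse` suggestion above can be sketched as follows; the tuple key and the `order_key` helper are illustrative, not the PR's actual types. Wrapping the size in `Reverse` flips only that field's comparison, so a descending sort yields highest fee first and, among equal fees, smallest transaction first, without storing a negated value:

```rust
use std::cmp::Reverse;

// Hypothetical ordering key: fee descending, size ascending.
// `Reverse(size)` inverts the size comparison inside an otherwise
// descending sort, replacing the negative-size trick.
fn order_key(fee: u64, size: u64) -> (u64, Reverse<u64>) {
    (fee, Reverse(size))
}

fn main() {
    let mut txs = vec![
        (order_key(10, 500), "a"),
        (order_key(10, 100), "b"),
        (order_key(20, 900), "c"),
    ];
    // Sort descending on the key: highest fee first, then smallest size.
    txs.sort_by(|x, y| y.0.cmp(&x.0));
    let ids: Vec<_> = txs.iter().map(|t| t.1).collect();
    assert_eq!(ids, ["c", "b", "a"]);
}
```

The same key works unchanged as a `BTreeMap` or `BinaryHeap` key, which is where the readability win over manual negation shows up.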
Motivation
It is unfortunate that we can't take speculative execution fees from deployments at the moment, so this serves as a temporary fix to prevent DoS attacks.
Setting the priority_fee to 0 means that, in theory, there could be a situation where deployments never come to the front of the queue. But in that scenario we are being bombarded with fee-paying executions, which means the community is having enough fun with the existing deployments. Alternatively, we could create a separate `deployments_queue` from which we can always pop some off.
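A minimal sketch of that alternative, assuming a reserved-slot policy (the interval, field names, and `Mempool` type are invented for illustration; the PR does not implement this):

```rust
use std::collections::VecDeque;

// Hypothetical sketch of the "separate deployments_queue" idea: pop mostly
// from the executions queue, but reserve every Nth pop for a deployment so
// deployments cannot be starved by a flood of fee-paying executions.
struct Mempool {
    executions: VecDeque<&'static str>,
    deployments: VecDeque<&'static str>,
    counter: usize,
}

impl Mempool {
    // Assumed ratio, not from the PR: one deployment slot per 4 pops.
    const DEPLOYMENT_INTERVAL: usize = 4;

    fn pop(&mut self) -> Option<&'static str> {
        self.counter += 1;
        if self.counter % Self::DEPLOYMENT_INTERVAL == 0 {
            // Reserved slot: prefer a deployment if one is waiting.
            self.deployments.pop_front().or_else(|| self.executions.pop_front())
        } else {
            self.executions.pop_front().or_else(|| self.deployments.pop_front())
        }
    }
}

fn main() {
    let mut m = Mempool {
        executions: ["e1", "e2", "e3", "e4", "e5"].into_iter().collect(),
        deployments: ["d1"].into_iter().collect(),
        counter: 0,
    };
    let order: Vec<_> = std::iter::from_fn(|| m.pop()).collect();
    // The deployment lands in the reserved 4th slot despite the execution backlog.
    assert_eq!(order, ["e1", "e2", "e3", "d1", "e4", "e5"]);
}
```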