
CHIA-2136 have generate-chain fill compressed blocks #19932


Draft · wants to merge 8 commits into main

Conversation

jack60612 (Contributor)

Purpose:

Changes generate-chain to generate and fill compressed blocks.

Current Behavior:

No compressed blocks are generated.

New Behavior:

Compressed blocks are filled using an in-memory coin store, and also via the standard mempool.

Testing Notes:

@jack60612 added the Changed label (required label that categorizes the merge commit message as "Changed" for the changelog) on Aug 10, 2025
@arvidn (Contributor) left a comment:

did you explore adding another option to BlockTools, next to include_transactions that fills the block to the max, using the BlockBuilder directly? I suspect that would end up being simpler without affecting existing tests

@@ -403,7 +403,10 @@ def setup_new_gen(
assert rng is not None
bundle, additions = make_spend_bundle(available_coins, wallet, rng)
removals = bundle.removals()
program = simple_solution_generator(bundle).program
if curr.height >= self.constants.HARD_FORK_HEIGHT:
Contributor:

this is a reasonable change. It doesn't affect the number of transactions per block (which is good, that should stay the same). It does, however, change the chain, so the test_cache would have to be updated. Doing so expands the scope of this task quite a lot, so I think it's probably best to avoid it.

@jack60612 (Contributor, Author):

did you explore adding another option to BlockTools, next to include_transactions that fills the block to the max, using the BlockBuilder directly? I suspect that would end up being simpler without affecting existing tests

I'll look at this; I keep getting blocked and having to go down consensus rabbit holes.

@jack60612 jack60612 changed the title have generate-chain fill compressed blocks CHIA-2136 have generate-chain fill compressed blocks Aug 11, 2025
@github-actions bot added the merge_conflict label (branch has conflicts that prevent merge to main) on Aug 13, 2025
@github-actions bot removed the merge_conflict label on Aug 13, 2025
@jack60612 jack60612 requested a review from arvidn August 13, 2025 18:59
assert rng is not None
assert curr.height >= self.constants.HARD_FORK_HEIGHT # we need new compression for BlockBuilder
# function constants
adjusted_max_cost: uint64 = uint64(self.constants.MAX_BLOCK_COST_CLVM * block_fillrate / 100)
Contributor:

I suspect you could make this much simpler by not allowing the fill rate to be configured. We really just need full blocks (for the benchmark).

adjusted_max_cost: uint64 = uint64(self.constants.MAX_BLOCK_COST_CLVM * block_fillrate / 100)
static_cost: uint64 = uint64(4839648) # cond + exec cost per sb
static_comp_cost: uint64 = uint64(7684000) # byte cost per sb after compression
max_batch_size: int = int(1400 * 0.10 * block_fillrate / 100) # 10% of the way done with the block
Contributor:

The fewer iterations you need, the better the performance. Did you run into an issue that was solved by having a max batch size?

# function constants
adjusted_max_cost: uint64 = uint64(self.constants.MAX_BLOCK_COST_CLVM * block_fillrate / 100)
static_cost: uint64 = uint64(4839648) # cond + exec cost per sb
static_comp_cost: uint64 = uint64(7684000) # byte cost per sb after compression
Contributor:

this constant looks really suspicious. BlockBuilder will tell you what the current block cost is (as long as the cost passed to add_spend_bundles() is correct).

jack60612 (Contributor, Author):

If we want to reduce iterations, we have to guesstimate the compressed block cost, so this helps us skip iterations.
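A self-contained sketch of that guesstimate, using the `static_comp_cost` constant from the diff above. The function name and signature are hypothetical; it just mirrors the loop's stopping condition: stop once the projected cost after the next bundle would exceed the adjusted maximum, or once the batch reaches `max_batch_size`.

```python
STATIC_COMP_COST = 7684000  # estimated byte cost per spend bundle after compression (from the diff)


def estimate_remaining_bundles(total_cost: int, adjusted_max_cost: int, max_batch_size: int) -> int:
    """Estimate how many more spend bundles the compressed-cost guess says will fit."""
    n = 0
    while n < max_batch_size:
        # Projected block cost if one more bundle is added (the "+ 1" in the diff).
        val_after_next_sb = (n + 1) * STATIC_COMP_COST + total_cost
        if val_after_next_sb > adjusted_max_cost:
            break
        n += 1
    return n
```

Because the per-bundle cost is a static guess rather than the builder's reported cost, this only skips builder iterations; the final authority on whether the block is full is still the builder itself.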

Comment on lines 460 to 461
val_after_next_sb = (len(batch_bundles) + 1) * static_comp_cost + total_cost
if val_after_next_sb > adjusted_max_cost or len(batch_bundles) == max_batch_size:
Contributor:

Suggested change:
- val_after_next_sb = (len(batch_bundles) + 1) * static_comp_cost + total_cost
- if val_after_next_sb > adjusted_max_cost or len(batch_bundles) == max_batch_size:
+ if len(batch_bundles) * static_cost + total_cost > adjusted_max_cost:

batch_removals: list[Coin] = []
batch_additions: list[Coin] = []
while len(avail_coins) > 0:
val_after_next_sb = (len(batch_bundles) + 1) * static_comp_cost + total_cost
Contributor:

if static_comp_cost is really the true execution + condition cost for one transaction, why do you add 1 to len(batch_bundles)?

jack60612 (Contributor, Author):

To account for the next one that's about to be added, but I'll revisit.

@@ -712,9 +798,14 @@ def get_consecutive_blocks(
include_transactions: bool = False,
skip_overflow: bool = False,
min_signage_point: int = -1,
block_fillrate: Optional[int] = None,
Contributor:

An alternative would be to make include_transactions an integer: 0 = no transactions, 1 = 5 transactions per block, 2 = full blocks.
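A minimal sketch of what that integer-valued option could look like. The enum and member names here are hypothetical, not existing BlockTools API; the level meanings are taken from the comment above.

```python
from enum import IntEnum


class IncludeTransactions(IntEnum):
    """Hypothetical integer levels for an include_transactions option."""
    NONE = 0            # no transactions
    FIVE_PER_BLOCK = 1  # five transactions per block
    FULL_BLOCKS = 2     # fill blocks to the maximum cost
```

One nice property of `IntEnum` here: members compare equal to plain ints (and bools), so existing call sites passing `include_transactions=True` would still match level 1, since `True == 1`.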

jack60612 (Contributor, Author):

sounds good

include_transactions=include_transactions,
block_refs=block_refs,
)
if new_gen_cache is None:
Contributor:

maybe this cache should be moved into setup_new_gen(), since it's duplicated now

jack60612 (Contributor, Author):

how would that persist across executions?

jack60612 (Contributor, Author):

Oh wait, the class. I forgot it was part of that.
