
Modify perf CDT tests to use the im4gn.xlarge instance type #18556

Merged
2 commits merged into redpanda-data:dev on May 21, 2024

Conversation

ballard26
Contributor

This PR modifies the existing perf CDT tests to use a much smaller 3x im4gn.xlarge cluster. The hope is that this will allow us to run more regression tests at the same cost as the existing tests.

Backports Required

  • none - not a bug fix
  • none - this is a backport
  • none - issue does not exist in previous branches
  • none - papercut/not impactful enough to backport
  • v24.1.x
  • v23.3.x
  • v23.2.x

Release Notes

  • none

"acks": "all",
"linger.ms": 1,
"max.in.flight.requests.per.connection": 5,
"batch.size": 131072,
Member
Let's set this to 16KB? It's a lot more realistic and avoids us running with larger forced batch sizes.

Contributor Author

Nice catch, switching it to the default of 16384.
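For reference, here is a sketch (not the PR's actual file) of the throughput-test producer settings after this change, using standard Apache Kafka producer property names; 16384 bytes is Kafka's default batch.size:

```python
# Sketch: producer settings from this thread, with batch.size
# back at the Kafka default of 16 KiB instead of the 128 KiB override.
producer_props = {
    "acks": "all",
    "linger.ms": 1,
    "max.in.flight.requests.per.connection": 5,
    "batch.size": 16384,  # Kafka default
}

# The old override forced batches 8x the default size.
print(131072 // producer_props["batch.size"])  # → 8
```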

"consumer_per_subscription": 200,
"producers_per_topic": 200,
"producer_rate": 150_000,
"consumer_per_subscription": 1,
Member

I guess a lot of consumers aren't really relevant for this test?

Contributor Author

Yeah, the goal here is to stress the produce path as much as possible, so I set this to reduce the load from the consumers to the lowest possible.

"acks": "all",
"linger.ms": 1,
"max.in.flight.requests.per.connection": 10,
"batch.size": 1024,
Member

Just set to 1?

Contributor Author

I keep forgetting we can do that. Switching it now.

"enable.idempotence": "false",
"acks": "all",
"linger.ms": 1,
"max.in.flight.requests.per.connection": 10,
Member

I think we should really test with idempotence on, as it's the more common codepath.

Contributor Author

Fair enough, switching it over.
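One consequence worth noting (my sketch, not from the PR): the Apache Kafka Java producer only allows enable.idempotence=true together with acks=all and at most 5 in-flight requests per connection, so the snippet's value of 10 has to drop as well:

```python
# Sketch: the constraint the Apache Kafka Java producer enforces
# when idempotence is enabled (acks=all, <= 5 in-flight requests).
def check_idempotent_props(props: dict) -> bool:
    if props.get("enable.idempotence") != "true":
        return True  # no extra constraints with idempotence off
    return (props.get("acks") == "all"
            and int(props.get("max.in.flight.requests.per.connection", 5)) <= 5)

props = {
    "enable.idempotence": "true",
    "acks": "all",
    "linger.ms": 1,
    "max.in.flight.requests.per.connection": 5,  # was 10; must be <= 5 now
}
print(check_idempotent_props(props))  # → True
```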

"producer_rate": 150_000,
"consumer_per_subscription": 1,
"producers_per_topic": 10,
"producer_rate": 30_000,
"message_size": 1024,
"payload_file": "payload/payload-1Kb.data",
Member

We could even use the 200 byte message size here or something

Contributor Author

Let me see if I can push the producer rate a bit higher with smaller messages.

Contributor Author

Unsurprisingly, using smaller messages doesn't allow for a higher producer rate.
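A back-of-envelope sketch of that outcome, assuming the rate is capped per message rather than per byte: shrinking the payload mostly shrinks byte throughput instead of unlocking a higher message rate. The 30_000 msg/s figure is the producer_rate from the config above; 200 bytes is the reviewer's suggested size:

```python
# Back-of-envelope: byte throughput at a fixed message rate.
rate = 30_000  # messages/s, from the test config above
mib_per_s = {size: rate * size / 2**20 for size in (1024, 200)}
for size, mib in mib_per_s.items():
    print(f"{size} B messages -> {mib:.1f} MiB/s")
```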

@ballard26 ballard26 merged commit a3bba50 into redpanda-data:dev May 21, 2024
17 checks passed