Improve tests in python_examples.py
#30115
base: main
Conversation
zerver/openapi/python_examples.py
result = client.add_default_stream(stream_id)
# {code_example|end}

assert result["result"] == "success"
This is a good set of changes given that these asserts are needed.
But it also seems like a pretty bad library design choice that these asserts would be needed. What I think a user would expect when using an API binding library like this, in Python or most other languages, is that if the request failed — if it came back with a 4xx or 5xx response instead of 2xx — then the call should throw, not return a result. Relying on the caller to check for an error seems highly error-prone.
(That doesn't mean we should necessarily invest resources in fixing this anytime soon. I'm not sure how many people use these Python bindings.)
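To make the design concern above concrete, here is a minimal sketch of the raise-on-failure behavior a caller might expect from a binding library. The names `ZulipApiError` and `call_checked` are invented for illustration; this is not the bindings' actual API, just one way the error-prone manual check could be lifted into the library.

```python
# Hypothetical sketch: a thin wrapper that raises when the server reports
# an error, instead of relying on callers to inspect result["result"].


class ZulipApiError(Exception):
    """Raised when an API call returns result == "error" (illustrative name)."""


def call_checked(method, *args, **kwargs):
    """Invoke a bindings method and raise on a non-success response.

    `method` is any bindings call that returns the usual response dict
    with a "result" field ("success" or "error").
    """
    response = method(*args, **kwargs)
    if response.get("result") != "success":
        raise ZulipApiError(response.get("msg", "unknown API error"))
    return response
```

With a wrapper like this, `call_checked(client.add_default_stream, stream_id)` would throw on a 4xx-style error response rather than silently returning it.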
It looks like 5xx responses do throw, but 4xx responses do not.
There are some 4xx errors checked here, and notably validate_against_openapi_schema will always return True without checking the spec for those cases:
if status_code.startswith("4"):
# This return statement should ideally be not here. But since
# we have not defined 400 responses for various paths this has
# been added as all 400 have the same schema. When all 400
# response have been defined this should be removed.
return True
So, in addition to adding these asserts for "success", it might be good to add a commit to assert for "error" in those 4xx tests. Or add a helper function here that checks/asserts the "result" field in the response, which defaults to checking for "success" and checks for "error" if we're explicitly testing an error response.
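One possible shape for the suggested helper, sketched here with an illustrative name (`assert_result`) and signature that are assumptions, not an actual implementation: it checks the "result" field, defaults to expecting "success", and takes a flag for the tests that explicitly exercise an error response.

```python
# Illustrative helper for the suggestion above: assert on the "result"
# field of an API response, defaulting to "success".


def assert_result(response: dict, expect_error: bool = False) -> None:
    """Assert that a response dict has the expected "result" value."""
    expected = "error" if expect_error else "success"
    assert response["result"] == expected, (
        f'expected result == "{expected}", got {response!r}'
    )
```

The success-path tests would then call `assert_result(result)` and the 4xx tests `assert_result(result, expect_error=True)`, replacing the repeated inline asserts.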
@Vector73 - Thanks for making these improvements. I had a few thoughts. Let me know if you have any questions about my comments.
Also, see my separate comment related to Greg's note. I think it'd be good to check both the success and error tests, so maybe instead of adding all of these asserts, we add a helper function to do that check.
Force-pushed from 61ebec1 to 185462f.
I have addressed the feedback from @laurynmm. I have also fixed some incorrect [...]. Also, there are 2 functions - [...]
@Vector73 - Here's another round of comments for these changes.
I'll note that this would be easier to review if you could split this into more than one commit. Doing that would also help you self-review for some of the small inconsistencies I've noticed in this last round of review.
So, having a commit for updating the linkifier endpoints to pass the filter_id
parameter for example. And a separate commit for the helper function for getting subscription IDs. And so on.
Looking at how I split things out in #30135 might be helpful with that.
For those two error checks that are returning success:
- I noted that if I comment out all the other tests in tools/api-tests and just run the deactivate realm test, then I do get the error response ... though the error code isn't the one documented, and as you noted it's a 401 error now :/
- I feel like you can work on figuring those out in a separate follow-up PR since they're already not working.
For error checks here in general, I'm wondering if we:
- Remove the calls to validate_against_openapi_schema since that's not currently validating the error response anyway.
- Add a check for the expected code in the error result.
I'll start a discussion in CZO and/or file an issue for validating error responses, but unfortunately it's a bit more complicated than adding error documentation to all the documented endpoints.
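The "check for the expected code" idea could look roughly like this. The helper name and signature are assumptions for illustration; the field names follow the Zulip API's documented error format, where error responses carry result == "error" plus a "code" field (e.g. "BAD_REQUEST").

```python
# Sketch of the suggested error-response check (illustrative, not the
# actual test-suite code): verify both the "result" and "code" fields.


def assert_error_response(response: dict, expected_code: str) -> None:
    """Assert that an API response is an error with the expected code."""
    assert response["result"] == "error", (
        f"expected an error response, got {response!r}"
    )
    assert response.get("code") == expected_code, (
        f'expected code "{expected_code}", got {response!r}'
    )
```

A 4xx test could then call `assert_error_response(result, "BAD_REQUEST")` in place of the schema validation that currently short-circuits on 4xx responses.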
# {code_example|start}
# Check whether a user is a subscriber to a given channel.
user_id = 7
stream_id = 1
Looking at the user_id being set here, should we do a pass to move these variable assignments to be outside of the user-facing documentation?
Probably can be done as a follow-up PR to these changes.
I have addressed the recent feedback by @laurynmm and split the changes into multiple commits.
Creates a helper function `validate_message` to remove the repeated code for verifying the sent messages.
Creates a helper function `get_subscribed_stream_ids` to fetch the subscribed stream IDs of the client. Updates various functions to remove hardcoded values in request bodies and use available API calls to fetch them.
Updates `*_realm_filter` functions to use `filter_id` from function argument rather than a hardcoded value.
Creates a helper function `validate_response_result` to verify the response of API request for each test.
@Vector73 - Thanks for splitting up your changes! It was much easier to review and inspired me to make a few more updates to these tests/this file.
I've pushed my updates here and it would be great for you to review those changes!
Adds `assert` statements to check if the request succeeded before validating against the OpenAPI schema.
Improves the robustness of tests in python_examples.py.
Fixes:
Screenshots and screen captures:
Self-review checklist
- [...] (variable names, code reuse, readability, etc.).
- Communicate decisions, questions, and potential concerns.
- Individual commits are ready for review (see commit discipline).
- Completed manual review and testing of the following: