server: add dynamic configuration for download variables #3960
Issue
I've found that downloads can be unreliable for large models, either due to errors during the download itself or while verifying the digest.
Workaround
Add environment variables to the server to allow runtime configuration of the number of parallel downloads and the maximum chunk size.
Follow-up
Ideally the server would be configured to respect the `?partNumber` query parameter that the returned `x-amz-mp-parts-count` header implies should be supported. I'd also like it to support `x-amz-checksum-mode=ENABLED` (currently returns a `501 NOT IMPLEMENTED`), so that each part number returns an expected digest in the response headers for `GET` and `HEAD`. This would enable us to split the digest by parts, so that if a part download fails, we don't need to retrieve the full model each time.

It seems that the current implementation is done via github.com/distribution/distribution, which delegates its `Range` handling to `http.ServeContent`, and as such does not support the `partNumber` functionality implied by the CloudFlare response headers. So any such support would have to fork github.com/distribution/distribution. CloudFlare also does not appear to support these headers.