This is a revised version of this comment.
ZipFile.remove(zinfo_or_arcname)
Remove a member from the archive. *zinfo_or_arcname* may be the full path of a member (a str) or a ZipInfo instance. The archive must be open with mode 'a', 'w', or 'x'.
ZipFile.repack(removed=None)
Rewrite the archive to physically remove stale local file entries, shrinking its file size. If *removed* is provided, it must be a sequence of ZipInfo objects representing removed entries; only their local file entries are stripped.
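For comparison, with the current stdlib the only way to drop a member is to rewrite the whole archive, which is exactly what remove() plus repack() aims to avoid. A rough sketch (remove_member is a hypothetical helper, not part of this PR):

```python
import os
import tempfile
import zipfile

def remove_member(path, arcname):
    # Hypothetical fallback for the current stdlib: rewrite the
    # whole archive without the given member.  The proposed
    # remove()/repack() would avoid this full rewrite.
    tmp = path + ".tmp"
    with zipfile.ZipFile(path) as src, zipfile.ZipFile(tmp, "w") as dst:
        for info in src.infolist():
            if info.filename != arcname:
                dst.writestr(info, src.read(info))
    os.replace(tmp, path)

# Exercise it on a throwaway archive.
path = os.path.join(tempfile.mkdtemp(), "demo.zip")
with zipfile.ZipFile(path, "w") as zf:
    zf.writestr("keep.txt", b"kept")
    zf.writestr("drop.txt", b"dropped")
remove_member(path, "drop.txt")
with zipfile.ZipFile(path) as zf:
    names = zf.namelist()
```

The full-rewrite cost is proportional to the archive size, which is the motivation for an in-place repack.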
Since repack() does not immediately clean up removed entries when remove() is called, the header information of removed file entries may be missing, making it technically difficult to determine whether certain stale bytes really belong to previously removed files and are safe to remove.
While local file entries begin with the magic signature PK\x03\x04, this alone is not a reliable indicator. For instance, a self-extracting ZIP file may contain executable code before the actual archive, which could coincidentally include such a signature, especially if it embeds ZIP-based content.
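A quick demonstration of the false-hit problem (the stub bytes are invented for illustration):

```python
import io
import zipfile

# Build a tiny archive, then prepend stub bytes that happen to
# contain the local-file-header magic, as a self-extracting
# executable might.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", b"hello")
archive = buf.getvalue()

stub = b"executable stub PK\x03\x04 not a real entry; "
data = stub + archive

# Naively scanning for the magic finds a false hit inside the stub
# before the archive's real first entry at offset len(stub).
hits = []
pos = data.find(b"PK\x03\x04")
while pos != -1:
    hits.append(pos)
    pos = data.find(b"PK\x03\x04", pos + 1)
```

This is why the signature alone cannot decide whether bytes form a stale entry.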
To safely reclaim space, repack() assumes that in a normal ZIP file, local file entries are stored consecutively; a BadZipFile error may be raised if the archive does not satisfy the expected layout.
Check the doc in the source code of _ZipRepacker.repack() (which is internally called by ZipFile.repack()) for more details.
There have been opinions that repacking should also support modes 'w' and 'x' (e.g. #51067 (comment)).
This is NOT introduced, since such modes do not truncate the file at the end of writing, and thus won't really shrink the file after a removal has been made. Although we could indeed change that behavior for the existing API, further care has to be taken because modes 'w' and 'x' may be used on an unseekable file and would be broken by such a change. OTOH, mode 'a' is not expected to work with an unseekable file, since an initial seek is made immediately when it is opened.
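The seekability difference is easy to demonstrate (Unseekable is a minimal illustrative stream, not a real-world class):

```python
import io
import zipfile

class Unseekable(io.RawIOBase):
    # Minimal write-only, unseekable stream, for illustration only.
    def __init__(self):
        self.buf = bytearray()
    def writable(self):
        return True
    def write(self, b):
        self.buf += b
        return len(b)
    def seekable(self):
        return False

# Mode 'w' works on an unseekable stream...
w = Unseekable()
zipfile.ZipFile(w, "w").close()

# ...while mode 'a' seeks immediately on open, so it cannot:
try:
    zipfile.ZipFile(Unseekable(), "a")
    a_ok = True
except OSError:  # io.UnsupportedOperation is an OSError subclass
    a_ok = False
```

So a truncating repack is only safe to tie to mode 'a', where a seekable file is already required.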
📚 Documentation preview 📚: https://cpython-previews--134627.org.readthedocs.build/
Add remove() and repack() to ZipFile
6aed859
5453dbc
Fix and optimize test code
80ab2e2
Handle common setups with setUpClass
72c2a66
Add tests for mode w and x for remove()
a4b410b
It probably would be better to raise an AttributeError instead of a ValueError here, since you are trying to access an attribute a closed zipfile doesn't have.
This behavior simply resembles open() and write(), which raise a ValueError in various cases. Furthermore, there has been a change from raising RuntimeError since Python 3.6:
Changed in version 3.6: Calling these methods on a closed ZipFile raises a ValueError. Previously, a RuntimeError was raised.
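The current behavior is easy to confirm with the stdlib as it is today:

```python
import io
import zipfile

# Using a closed ZipFile raises ValueError (RuntimeError before
# Python 3.6); the proposed remove()/repack() would mirror this.
zf = zipfile.ZipFile(io.BytesIO(), "w")
zf.close()
try:
    zf.open("anything")
    err = None
except ValueError as e:
    err = e
```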
Kindly informing @wimglenn about this PR. This should be more desirable and flexible than the previous PR, although care must be taken as there might be a potential risk in the space-reclaiming algorithm.
The previous PR is kept open in case some folks are interested in it. Will close when either one is accepted.
Introduce _calc_initial_entry_offset and refactor
a9e85c6
Optimize _calc_initial_entry_offset by introducing cache
236cd06
Introduce _validate_local_file_entry and refactor
bdc58c7
Introduce _debug and refactor
c3c8345
Introduce _move_entry_data and rework chunk_size passing
1b7d75a
Refactor _validate_local_file_entry
51c9254
Add strict_descriptor option
0d971d8
bc865be
Fix and improve validation tests
8f0a504
- Separate individual validation tests.
- Check underlying repacker not called in validation.
- Use `unlink` to prevent FileNotFoundError.
- Fix mode 'x' test.
Remove obsolete NameToInfo updating
0cb8682
Use zinfo rather than info
a788a00
Raise on overlapping file blocks
ae01b8c
Rework writing protection
edee203
- Set `_writing` to prevent `open('w').write()` during repacking.
- Move the protection logic to `ZipFile.repack()`.
Update doc
555ac78
@distantvale
@danny0838 I can share my own opinions about the desired behavior but of course not everyone would agree: I don't think extractall is a good analogy since it's read-only. Instead you should check for open('w'), write, write_str, and mkdir, each of which is a simple single entry handler. I don't see the need to separate into read/write, as I believe they should all be consistent. If certain functions, such as extract, have 2 versions - for single/multiple files, then it won't be an exception in the API, and if we find it useful, have both.
> I don't see the need to separate into read/write, as I believe they should all be consistent. If certain functions, such as extract, have 2 versions - for single/multiple files, then it won't be an exception in the API, and if we find it useful, have both.
The truth is that they are not consistent. When you say they should be consistent, do you mean there should be a multiple-file version for open('w'), write, write_str, and mkdir, or that there should be no multiple-file version for extractall, or that each of them should be a single method that supports both single and list input?
> There are still design considerations even for your proposed members, for example: Should there be remove plus removeall, or a remove that accepts either a single entry or a list?

I'd follow extract in this case.

> What to do if identical names are passed? Should they be treated as a set and deduplicated, or be removed multiple times?

I might be wrong here, but I think add allows adding multiple files with the same name, so in that case I'd remove multiple times. If I'm wrong about add, I'd treat it as a set.
Actually there is no ZipFile.add. Do you mean write and others?
> What to do if any passed value causes an error (such as a non-existent name)?

Just like in extractall, the names must be a subset of namelist().
In extractall each input is handled one by one, and any error causes subsequent inputs not to be handled. However, the current _remove_members handles removing and repacking simultaneously. If any error happens in the middle, the whole repacking is left partially done and the archive ends up in an inconsistent state, which is unlikely to be an acceptable consequence.
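The up-front validation being discussed could look like this sketch (check_members is a hypothetical helper, not part of the PR):

```python
import io
import zipfile

def check_members(zf, names):
    # Hypothetical helper: reject unknown names before touching the
    # archive, so a bad input cannot leave a repack half-finished.
    missing = set(names) - set(zf.namelist())
    if missing:
        raise KeyError(f"not in archive: {sorted(missing)}")

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", b"1")
    zf.writestr("b.txt", b"2")
    check_members(zf, ["a.txt", "b.txt"])  # passes
    try:
        check_members(zf, ["a.txt", "c.txt"])
        rejected = False
    except KeyError:
        rejected = True  # failed up front; nothing was modified
```

Validating everything before mutating anything keeps the failure mode atomic.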
Likewise, for extractall, providing duplicated names just extracts the same entry to the same filesystem path again; for removing, this would be a totally different story.
Anyway, the current _remove_members implementation DOESN'T do what you state above. If you favor that approach and really want such behavior, you'd have to work on that PR, adding more tests and gathering feedback (I'm probably not going to keep working on that unless it's the final decision of the issue). It would also be nice if you could provide the code of the implementation you've been using.
> When you say they should be consistent, do you mean there should be a multiple-file version for open('w'), write, write_str, and mkdir, or that there should be no multiple-file version for extractall, or that each of them should be a single method that supports both single and list input?
What I'm saying is that there's already a precedent in the package for a function that has 2 versions, so it's not out of the ordinary to add another one.
> Actually there is no ZipFile.add. Do you mean write and others?
Yup, I mean write.
> Just like in extractall, the names must be a subset of namelist(). [...] If any error happens in the middle, the whole repacking is left partially done and the archive will be in an inconsistent state.
The behavior of extractall might not be aligned with the documentation, which states that members must be a subset of namelist() - hard to say what the intention was, but I think it's a legitimate check. I agree re: the inconsistent state, and maybe that's the most significant reason to keep the external API simple for now. Once the initial version is released, expanding it to more functionality is easier.
Yeah I agree, and since multiple files with the same names are not very common, I'd opt for treating members as a set.
My implementation is based on the original PR, so not much there in regards to these options.
In any case I'd proceed with your approach for the time being.
Update NEWS
ce88616
Sync with danny0838/zipremove@1691ca2
5f093e5
Revise NEWS
11c0937
Sync with danny0838/zipremove@1843d87
4b2176e
59259dc
Fix timezone related timestamp issue
d9824ce
Simplify tests with data descriptors
85811ab
Sync with danny0838/zipremove@e790427
748ac63
Sync with danny0838/zipremove@87bcdb5
001a8d0
Sync with danny0838/zipremove@6a78bd1
3a364ce
Some thoughts in an initial pass. This is a pretty big change so I haven't had time to go over the repack implementation in detail. Thank you for working on all of these changes. I can see an impressive attention to detail!
I think it would be good to add that which one gets removed is unspecified and should not be relied on (to defensively discourage mis-use).
Actually the current implementation is deterministic: the one mapped by ZipFile.getinfo(name) gets removed, and the last one in the filelist with the same name (if it exists) becomes the new mapped one.
This is similar to what is mapped by ZipFile.getinfo(name) when there are multiple zinfos with the same name. The current implementation always maps the last one in the filelist, though this is also undocumented.
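The duplicate-name mapping being described can be observed with the current stdlib:

```python
import io
import warnings
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # "Duplicate name" warning
        zf.writestr("dup.txt", b"first")
        zf.writestr("dup.txt", b"second")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    # getinfo() maps to the last entry in the filelist, so read()
    # returns the later member's data.
    is_last = zf.getinfo("dup.txt") is zf.infolist()[-1]
    content = zf.read("dup.txt")
```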
The question is: should we document the definite behavior, or state that it's actually undefined and the current behavior should not be relied on? Before this question is solved I would just keep the current statement.
I think I would lean towards saying the behavior is undefined in this method. I would want some discussion about documenting the behavior with multiple zinfos.
I haven't gone back and looked, but I have a vague recollection that multiple zip entries with the same name is/was a "normal" zip file legacy-ish "feature", as that was how replacing the contents of one zip file member was implemented in cases where the entire file could not be rewritten: it could be done solely by rewriting the end of the file and the central directory.
@gpshead This is the convention of many ZIP tools, including Python, at least currently. Unfortunately it's not clearly documented in the ZIP spec.
I think it would be good to define what you mean by a stale local file entry here. In the third paragraph it is somewhat explained but I would suggest adding something earlier on about what a stale file entry is.
I think that "stale local file entries" describes the general abstract concept of what this method does (and should do).
From the context, readers should be able to infer that "stale local file entries" means "local file entries referenced by the provided removed ZipInfo objects" when removed is provided, and "local file entries that are no longer referenced in the central directory (and that meet the 3 criteria)" when removed is not provided, with potentially another definition if more algorithms/modes are added in the future.
I could be more explicit by saying that "stale local file entries" is defined as above, but it would probably be too redundant.
Readers should not need to read multiple paragraphs to infer the meaning of a phrase in the first sentence of documentation when it can be briefly defined earlier on instead.
I think defining what "stale" means in "a stale local file entry" would be sufficient, as that is not a term coming from appnote.txt, and is only introduced in these docs.
As aforementioned, the definition of "stale" is difficult and almost as complex/long as paragraphs 2-4. Even if we provide the "definition" first, the reader still needs to read the equally long sentences to get it.
We can simply replace "stale" with "a local file entry that doesn't exist in the central directory" or something similar.
What about simply unreferenced local file entries?
ambiguous imo
This seems hard to work around if this isn't the behavior a user wants. i.e. if the bytes PK\003\004 appear before the first entry in some other format then a user cannot use repack according to my reading of this (I will revisit this once I read the implementation). Perhaps it would be better to take a start_offset (defaulting to ZipFile.data_offset or 0 maybe) that is the offset to start the search?
I think documenting the list of limitations of repack's scan would be an acceptable alternative to adding this.
Can you be more specific about the "some other format" you are mentioning?
If it's something like:
[PK\003\004 and noise] [unreferenced local file entry] [first local file entry]
then the unreferenced local file entry will be removed (since it's "a sequence of consecutive entries with extra preceding bytes").
[PK\003\004 and noise] [first local file entry]
or
[unreferenced local file entry] [PK\003\004 and noise] [first local file entry]
then all bytes before the first local file entry will be preserved.
I think this is the reasonable behavior.
There are many formats that build on top of zip, such as doc, java class files, etc. These have prefixes and you need to be sure what you detect is an unreferenced file entry vs a local file entry magic and noise. As far as I am aware, there's no way to be certain what you have is an actual file entry. So there are potentially cases where the following layout could be a misinterpretation of a prefix to a zip file, and repack would incorrectly remove data from that prefix.
Yes, there is no 100% definite way to determine that a sequence of bytes is a stale local file entry; that's why I call it a heuristic. But I think the current criteria are accurate enough for most real-world cases, and a false removal is very, very unlikely to happen.
If you don't agree with me, can you provide a good example or test case where normal bytes are misinterpreted and falsely removed as local file entries before the first referenced local file entry? Without this it's kind of an abstract discussion, which is non-constructive.
Given that heuristics and data scanning are involved, I think it is worthwhile to add a .. note:: to the repack docs here stating that it (1) cannot be guaranteed safe to use on untrusted zip file inputs, or (2) does not guarantee that zip-like archives, such as those with executable data prepended, will survive unharmed.
Realistically I think people with (2) style formats should know this but it is good to set expectations.
I'm asking for (1) preemptively because we're basically guaranteed to get security "vulnerability" reports about all possible API behaviors today [I'm on the security team - we really do], so pre-documenting the thing makes replying to those easier. 😺 We are never going to be able to prevent all such DoS, zipbomb, frankenzip, or multiple-interpretations-by-different-tooling reports, given that the zip format definition, combined with the collective behavior of the world's implementations, is so... fuzzy.
> can you provide a good example or test case that normal bytes be misinterpreted and falsely removed as local file entries before the first referenced local file entry?
That's exactly the kind of thing someone would do in a future security@ report. :P We can stay ahead of that by at least not offering a guarantee of safety for this API. The first innocent real world starting points that come to mind are zip files embedded stored within zip files. And file formats involving multiple zip files within them where only one of them appears at the end of the file and thus acts like a zip to normal zip file tooling such as zipfile seeking out the end of file central directory. Those might not trigger a match in your logic as is, but work backwards from the logic and there may be ways to construct some that could? I'm just suggesting we don't try to overthink our robustness and make guarantees here.
For (1): not only ZipFile.repack but the whole zipfile package is not guaranteed to be safe on an untrusted ZIP file. If a note or warning is required, it probably should be for the whole package rather than for this method only.
For (2): this is what repack wants to at least guarantee. The current algorithm should be quite safe against a false positive, as it's very unlikely that random binary bytes would happen to form a local file header magic and happen to have an "entry length" that ends exactly at the position of the first referenced local file header (or the next "local file entry").
This could even be improved by providing an option to check the CRC when validating everything that looks like a "local file entry". Though it would impact performance significantly, and I don't think it's worth it.
A prepended zip should be safe since it has a central directory, which will be identified as "extra following bytes" and skipped from stripping. Unless the zip is abnormal, e.g. its last local file entry overlaps the central comment and thus has no additional bytes after its end.
A zip embedded as the content of a member is also 100% safe since the algorithm won't strip anything inside a local file entry.
A zip embedded immediately after a local file entry will be falsely stripped, but it's explicitly precluded by the documented presumption that "local file entries are stored consecutively", and should be something unlikely to happen on a normal zip-like file.
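The "entry length ends exactly at the next entry" chaining idea can be sketched as follows. This is my own rough illustration, not the PR's actual code, and it ignores data descriptors, zip64, and other complications:

```python
import io
import struct
import zipfile

LFH_SIG = b"PK\x03\x04"

def chains_to(data, start, expected_end):
    # Parse a local file header at `start` and check that the entry
    # (header + name + extra + compressed data) ends exactly at
    # `expected_end`.  A random magic hit almost never chains.
    if data[start:start + 4] != LFH_SIG:
        return False
    (csize,) = struct.unpack_from("<L", data, start + 18)
    nlen, elen = struct.unpack_from("<HH", data, start + 26)
    return start + 30 + nlen + elen + csize == expected_end

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", b"hello")
    zf.writestr("b.txt", b"world!")
data = buf.getvalue()
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    first, second = zf.infolist()
ok = chains_to(data, first.header_offset, second.header_offset)
```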
Given that the current documentation already explains its assumption and algorithm, I expect the developer to be able to estimate the risk on their own. Although it's not 100% safe, worrying about this may be like worrying about a repository breaking due to a SHA-1 collision when using Git. I agree that it would be good to set a fair expectation for the heuristics-based algorithm and encourage providing removed for better performance and accuracy, but I also don't want to give the impression that the algorithm is something fragile that could easily blow up on a random input. Don't you think it would be overkill for Git to show a big warning saying that it doesn't guarantee your data won't break accidentally?
Anyway, I'm open to this. It's welcome if someone can provide a good rephrasing or note without such issues.
Revised in 93db94a26
Sync with danny0838/zipremove@092f98b
0832528
Revise doc for repack
f20ec5d
Revise doc for remove
8e69c09
Update data_offset
725b1a3
45942ab
Why not default to strict_descriptor=True given that it performs better and the zip files we expect people to be manipulating in remove/repack manners are presumed most likely to be "modern" forms?
This is exactly one of the open questions (#134627 (comment)).
The current quick decision is primarily because it adheres better to the spec, and most of the Python stdlib tends to prioritize compatibility over performance - e.g. json.dump with ensure_ascii=True and http.server with HTTP version 1.0. But it's not solid and can be changed, based on a vote or something?
9e82bb7
93db94a
Sync with danny0838/zipremove@8bedf7c
72673e0
Sync with danny0838/zipremove@86a240b
e926a95
gpshead left review comments
merwok left review comments
distantvale left review comments
jaraco approved these changes
sharktide approved these changes
Awaiting requested review from emmatyping