libavformat/mov.c: attacker-controlled count in mov_read_keys can trigger excessive allocations / OOM #229

Closed
opened 2025-12-23 10:36:28 +01:00 by backuprepo · 1 comment

Originally created by @fa1c4 on GitHub (Dec 20, 2025).

Description

While fuzzing ffmpeg-rockchip, I found that mov_read_keys() in libavformat/mov.c can be forced into excessive memory allocation due to insufficient bounding of the user-controlled key count (count) read from the keys atom in MOV/MP4 metadata.

The current check prevents integer overflow in the pointer-array allocation, but still permits very large count values that cause multi-GB allocations (and then additional per-key allocations), leading to out-of-memory termination and a reliable denial-of-service when parsing a crafted file.
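For scale, here is a minimal sketch (hypothetical helper names; assumes meta_keys holds char * pointers on an LP64 system, so 8 bytes per entry) of just how large a count the existing overflow-only guard still admits, and the pointer-array allocation it triggers:

```c
#include <limits.h>
#include <stdint.h>

/* Mirrors the existing guard, assuming sizeof(*c->meta_keys) == 8:
 * the largest count that passes `count > UINT_MAX / 8 - 1`. */
static uint32_t max_count_passing_check(void)
{
    return (uint32_t)(UINT_MAX / sizeof(char *) - 1); /* 536870910 */
}

/* Pointer-array allocation that count triggers: (count + 1) * 8 bytes,
 * i.e. just under 4 GiB -- before any per-key allocations. */
static uint64_t pointer_array_bytes(uint32_t count)
{
    return ((uint64_t)count + 1) * sizeof(char *);
}
```

That near-4 GiB request alone exceeds the 2 GB RSS limit under which the fuzzer observed the OOM, before a single per-key buffer is allocated.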

Affected code

libavformat/mov.c → mov_read_keys()

Relevant snippet (current behavior):

count = avio_rb32(pb);
if (count > UINT_MAX / sizeof(*c->meta_keys) - 1) {
    av_log(c->fc, AV_LOG_ERROR,
           "The 'keys' atom with the invalid key count: %"PRIu32"\n", count);
    return AVERROR_INVALIDDATA;
}

c->meta_keys_count = count + 1;
c->meta_keys = av_mallocz(c->meta_keys_count * sizeof(*c->meta_keys));
if (!c->meta_keys)
    return AVERROR(ENOMEM);

for (i = 1; i <= count; ++i) {
    uint32_t key_size = avio_rb32(pb);
    uint32_t type = avio_rl32(pb);
    if (key_size < 8) {
        av_log(c->fc, AV_LOG_ERROR,
               "The key# %"PRIu32" in meta has invalid size:"
               "%"PRIu32"\n", i, key_size);
        return AVERROR_INVALIDDATA;
    }
    key_size -= 8;
    if (type != MKTAG('m','d','t','a')) {
        avio_skip(pb, key_size);
    }
    c->meta_keys[i] = av_mallocz(key_size + 1);
    if (!c->meta_keys[i])
        return AVERROR(ENOMEM);
    avio_read(pb, c->meta_keys[i], key_size);
}

Impact

  • Crafted input can cause OOM and abort the process (observed under libFuzzer with a 2 GB RSS limit).
  • This affects any application that parses attacker-controlled MOV/MP4 files through libavformat.

Root cause analysis

  • count is attacker-controlled (read from the file) and only checked for overflow safety, not for reasonable upper bounds relative to:
    • the enclosing atom size / remaining bytes, and/or
    • a defined maximum number of keys.
  • As a result, the pointer-array allocation of roughly count * sizeof(pointer) bytes can still be enormous (multi-GB) and cause OOM.
  • Additionally, key_size is used to allocate key_size + 1 bytes per entry without a strict bound against the remaining bytes in the atom/stream, amplifying memory pressure.
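Since every entry in the keys atom carries at least a 4-byte key_size field and a 4-byte type field, the atom's remaining payload gives a natural upper bound on count. A hypothetical helper sketching that bound (names are illustrative, not from the FFmpeg source):

```c
#include <stdint.h>

/* Hypothetical check: a 'keys' atom with `remaining` payload bytes left
 * after the count field can hold at most remaining / 8 entries, because
 * each entry needs at least a 4-byte size and a 4-byte type. */
static int keys_count_plausible(uint32_t count, uint64_t remaining)
{
    return count <= remaining / 8;
}
```

Under this bound, a count of hundreds of millions would be rejected immediately unless the file actually carries hundreds of megabytes of key payload.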

Steps to reproduce

  1. Build/run with OSS-Fuzz style harness (or an equivalent libFuzzer harness for MOV parsing).
  2. Run the fuzzer binary against the attached crashing input:
./ffmpeg_dem_MOV_fuzzer oom-f528a3c32455549e702ad8b2c9f843770d63253b

Observed result (excerpt):

ERROR: libFuzzer: out-of-memory (used: 2950Mb; limit: 2048Mb)
To change the out-of-memory limit use -rss_limit_mb=<N>

Expected behavior

Parsing should reject unreasonable count / key_size values early (returning AVERROR_INVALIDDATA) rather than attempting allocations that can exhaust memory.

Actual behavior

The parser attempts to allocate extremely large buffers and the process is terminated due to OOM (reproducible under libFuzzer / constrained environments).

Environment

  • Repository: nyanmisaka/ffmpeg-rockchip
  • Branch/commit: <please fill in commit SHA / tag tested>
  • OS: <e.g., Ubuntu 20.04 x86_64>
  • Build config: <e.g., OSS-Fuzz libFuzzer ASan build flags>
  • Reproducer: oom-f528a3c32455549e702ad8b2c9f843770d63253b (attached)

Suggested fix / mitigation

Any of the following (or a combination) would address the DoS vector:

  1. Bound count using atom size / remaining bytes: ensure count cannot exceed what can be represented by the remaining payload, given a minimum per-entry size.
  2. Apply a reasonable hard cap on count (e.g., a maximum number of keys) to prevent unbounded allocations even if the atom claims huge counts.
  3. Validate key_size against remaining bytes before allocating/reading, and reject entries that would exceed the atom boundary or an upper limit.
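As a sketch of fixes 1 and 3 (not a drop-in patch; the helper names are hypothetical, and `remaining` is assumed to be the unread payload of the keys atom after the version/flags and count fields):

```c
#include <stdint.h>

/* Fix 1: reject counts that cannot fit in the remaining payload, given
 * the 8-byte minimum entry size (key_size field + type field). */
static int validate_key_count(uint32_t count, uint64_t remaining)
{
    return count <= remaining / 8;
}

/* Fix 3: key_size includes its own 8-byte header, so it must be at
 * least 8 and no larger than the bytes still unread; on success,
 * consume the entry from the running total. */
static int validate_key_size(uint32_t key_size, uint64_t *remaining)
{
    if (key_size < 8 || key_size > *remaining)
        return 0;
    *remaining -= key_size;
    return 1;
}
```

With both checks, each av_mallocz() call is bounded by bytes the file actually provides, so a crafted header can no longer request allocations orders of magnitude larger than the input.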

I am happy to test a patch if you have a proposed change.

Attachments

https://github.com/user-attachments/assets/60c60f03-d187-40aa-b5ff-23a118ca028f

backuprepo closed this issue and added the question label (2025-12-23 10:36:28 +01:00).

@nyanmisaka commented on GitHub (Dec 20, 2025):

For archival purposes, there are no plans to rebase the 6.x branch to the latest upstream commit.

Please use the latest 7.x and 8.x branches: https://github.com/nyanmisaka/ffmpeg-rockchip/branches
