[Issue]: Parallel library scan tasks limit default size too high #6079

Closed
opened 2025-12-22 03:32:46 +01:00 by backuprepo · 5 comments
Owner

Originally created by @grobalt on GitHub (Jul 1, 2024).

Please describe your bug

The default setting is "" (empty). That causes many NAS devices (I tested mine and a few friends') to crash completely.

The combination of `JELLYFIN_FFmpeg__analyzeduration=200M` with endless ffprobes (e.g. a full library scan on a fresh install) causes out-of-memory conditions. On my 13600K it ran more than 250 ffprobe processes against 8 spinning disks; they don't have enough I/O and therefore never finished, eating up nearly 50 GB of RAM, and the server still crashed.

Reproduction Steps

Reproduction: run a full scan with the current version and default settings on a pure spinning-disk NAS / Unraid. I reproduced around 25 freezes in the last 24 hours.

My recommendation is to set a default limit of 1 or 2, or many people will crash their disk-based NAS. If someone has flash-based storage the limit can be increased, but the default value should be a safe number.
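The failure mode described above (one probe process per core, hundreds queued against slow disks) is the classic argument for a small bounded-concurrency default. A minimal sketch in Python, using a thread pool as the bound; `probe_file` is a hypothetical stand-in for launching ffprobe against one media file, not Jellyfin's actual code:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def probe_file(path, results, lock):
    # Hypothetical stand-in for running ffprobe on one file.
    with lock:
        results.append(path)

def scan_library(paths, limit=2):
    """Probe files with at most `limit` concurrent workers.

    A small fixed limit (1-2) keeps spinning disks responsive; a limit
    equal to the CPU core count can launch hundreds of simultaneous
    probes on a modern many-core machine during a full library scan.
    """
    results, lock = [], threading.Lock()
    with ThreadPoolExecutor(max_workers=limit) as pool:
        for p in paths:
            pool.submit(probe_file, p, results, lock)
    # Exiting the `with` block waits for all submitted probes to finish.
    return results
```

With `limit=2`, at most two probes ever run at once, so I/O pressure on the disks stays constant regardless of library size.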

Jellyfin Version

Master branch

if other:

10.9.7

Environment

- OS: unraid 6.12.10
- Linux Kernel:
- Clients: 0
- Browser: chrome
- Networking: 2.5Gbit
- Storage: 8x 18TB Enterprise SATA Western Digital Gold

Jellyfin logs

No useful entry in the Jellyfin log, even with it set to debug. It just crashes the whole server.

FFmpeg logs

No response

Please attach any browser or client logs here

No response

Please attach any screenshots here

![WhatsApp Bild 2024-07-01 um 16 01 25_f94cf85b](https://github.com/jellyfin/jellyfin/assets/79084968/3df6e80d-f432-42df-b10c-8330621c8dbe)
![WhatsApp Bild 2024-07-01 um 16 41 28_ebce7025](https://github.com/jellyfin/jellyfin/assets/79084968/2b01574f-ce4d-4966-b739-ebb713d58222)
![WhatsApp Bild 2024-07-01 um 16 43 57_7c84cd8b](https://github.com/jellyfin/jellyfin/assets/79084968/028a9beb-99a7-4195-9483-6b527e7f0d80)

Code of Conduct

  • I agree to follow this project's Code of Conduct
backuprepo 2025-12-22 03:32:46 +01:00
Author
Owner

@jellyfin-bot commented on GitHub (Jul 1, 2024):

Hi, it seems like your issue report has the following item(s) that need to be addressed:

  • You have not filled in the environment completely.

This is an automated message, currently under testing. Please file an issue [here](https://github.com/jellyfin/jellyfin-triage-scripts/issues) if you encounter any problems.

Author
Owner

@cvium commented on GitHub (Jul 1, 2024):

The default is actually [Environment.ProcessorCount](https://learn.microsoft.com/en-us/dotnet/api/system.environment.processorcount?view=net-8.0)

Author
Owner

@grobalt commented on GitHub (Jul 1, 2024):

> The default is actually [Environment.ProcessorCount](https://learn.microsoft.com/en-us/dotnet/api/system.environment.processorcount?view=net-8.0)

Okay, that somehow makes sense, but no spinning disk likes current core counts as parallel threads :) It's only usable on old CPUs with low core counts.
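The mismatch being pointed out here is that `Environment.ProcessorCount` scales with the CPU, not the storage. One way a safer default could look is to cap the core-count fallback; a hedged Python sketch (the cap of 4 is illustrative, not anything Jellyfin actually does):

```python
import os

def default_scan_limit(configured=None, cap=4):
    """Pick a parallel library-scan limit.

    If the user configured a positive value, honor it; otherwise fall
    back to the CPU core count, capped so high-core machines do not
    overwhelm spinning disks. `cap=4` is an illustrative assumption.
    """
    if configured is not None and configured > 0:
        return configured
    return min(os.cpu_count() or 1, cap)
```

On a 20-thread 13600K this yields 4 instead of 20 probe workers by default, while an explicit user setting still wins.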

Author
Owner

@felix920506 commented on GitHub (Jul 2, 2024):

From [your post on the Unraid forums](https://forums.unraid.net/topic/80744-support-linuxserverio-jellyfin/?do=findComment&comment=1437149), you are using the Linuxserver.io image. That image has broken ffprobe thread limiting. Please switch to the official image.

Author
Owner

@grobalt commented on GitHub (Jul 2, 2024):

> From [your post on the Unraid forums](https://forums.unraid.net/topic/80744-support-linuxserverio-jellyfin/?do=findComment&comment=1437149), you are using the Linuxserver.io image. That image has broken ffprobe thread limiting. Please switch to the official image.

I changed to the official image yesterday; using the core count as the number of parallel threads is still not good. This does not help. The intention of this issue was to change the default parallel-thread setting of new installations to 1 or 2.

Reference: starred/jellyfin#6079