Possible memory leak on Linux version? Memory usage climbing 100MB per hour even when idle. Windows version not affected. #6359
Originally created by @robotreto on GitHub (Oct 17, 2024).
Description of the bug
There have been similar issues reporting suspected memory leaks in the Jellyfin server over the past couple of months:
https://github.com/jellyfin/jellyfin/issues/11551
https://github.com/jellyfin/jellyfin/issues/11588
However, the developers could not really reproduce them. As my Jellyfin server on a QNAP is also affected by ever-growing memory usage, and as I am a programmer myself, I decided to look into the issue, downloaded the source (master, 10.10.0) and built the server under VS2022. However, I could not reproduce the issue: memory usage remained stable, at least under Windows 11. I used the exact same media libraries as on my QNAP (11'000 MP4/MKV, 60'000 JPG and 10'000 MP3, about 18 TB in total). So it seems to depend on whether the server runs under Linux or Windows.

I started both servers at the same time and monitored memory usage over the next 16 hours:
While running under Windows, memory consumption is reasonable and stable, even dropping when idle (which shows the garbage collector is doing its job); under Linux, usage keeps increasing by about 50-100 MB per hour, even when the server is idle.
This difference between Windows and Linux could explain why the developers could not reproduce the issue: they are probably developing under Windows, where the problem does not show up.
This behavior has also been reported by other users, especially those running Jellyfin under Docker; some report memory usage of more than 20 GB over time. Sadly, all of those issues have been closed so far while the problem still exists.
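For reference, this kind of measurement can be reproduced with a simple loop that samples the server's resident set size over time; a minimal sketch, assuming a single process named "jellyfin" and standard pidof/ps tools:

```sh
# Minimal sketch: log the Jellyfin server's resident set size (RSS, in KiB)
# every 5 minutes. Assumes a single process named "jellyfin".
while true; do
    pid=$(pidof -s jellyfin)
    if [ -n "$pid" ]; then
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$(ps -o rss= -p "$pid")" >> jellyfin-rss.log
    fi
    sleep 300
done
```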
Reproduction steps
What is the current bug behavior?
At some point, the memory will be exhausted and the server will be terminated with an out-of-memory exception. This can take many weeks, depending on how much RAM the host has.
What is the expected correct behavior?
The Linux version should have a memory-usage pattern similar to the Windows version, and certainly not run into an out-of-memory exception after a certain time.
Jellyfin Server version
Master
Specify commit id
No response
Specify unstable release number
No response
Specify version number
10.9.10
Specify the build version
10.9.10, QNAP port by pdulvp
Environment
Jellyfin logs
FFmpeg logs
No response
Client / Browser logs
No response
Relevant screenshots or videos
No response
Additional information
One thing we have to be aware of: as the QNAP version is older than the current master (10.10.0), I may be comparing apples and oranges, and the issue might already be fixed in 10.10.0.
And no, I am not a Linux hater; I use Linux myself on some machines, but I have to admit I don't know how to program/debug under Linux.
@darthShadow commented on GitHub (Oct 17, 2024):
Not sure how QNAP does its packaging, but can you verify whether you have MALLOC_TRIM_THRESHOLD_ set for your Jellyfin environment (source: 6beda5c94f/debian/conf/jellyfin (L28))? This was specifically added to partially address memory leaks seen earlier: https://github.com/jellyfin/jellyfin/pull/10454
There are a bunch of memory issues related to the dotnet runtime on Linux in their GH tracker if you want to take a look to see if any mentioned settings help control the memory usage. One example of such settings: https://github.com/dotnet/runtime/issues/49317#issuecomment-1719158573
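For anyone who wants to experiment, the variables being referred to look roughly like the sketch below; the values are illustrative assumptions, not tested recommendations, and the .NET settings come from the linked dotnet/runtime discussion rather than from Jellyfin's own packaging:

```sh
# Illustrative sketch only: environment for the Jellyfin service, e.g. in
# /etc/default/jellyfin or whatever startup script launches the server.

# glibc: release freed allocations above this threshold (bytes) back to the OS.
# This is the variable the official Debian packaging sets.
MALLOC_TRIM_THRESHOLD_=131072

# .NET runtime knobs discussed in the linked dotnet/runtime issue
# (assumed values, adjust or drop as needed):
DOTNET_gcServer=0          # workstation GC instead of server GC
DOTNET_GCConserveMemory=5  # trade some CPU for a smaller heap (.NET 6+)
```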
@nyanmisaka commented on GitHub (Oct 17, 2024):
The community-maintained pdulvp/jellyfin-qnap package is missing this environment variable.
@felix920506 commented on GitHub (Oct 17, 2024):
Closing this, as it is an issue caused by a 3rd-party package.
@robotreto commented on GitHub (Oct 17, 2024):
Which 3rd-party package exactly? If you mean QNAP, then please know the same memory runaway has also been reported under other Linux setups, including Docker containers, so the QNAP package is not to blame.
@soakes commented on GitHub (Oct 17, 2024):
Just chiming in here: even adding the MALLOC_TRIM_THRESHOLD_=131072 ENV as shown in https://github.com/jellyfin/jellyfin/issues/11551 still OOM's. Even if @robotreto goes and adds this ENV, it will not stop what's happening from happening. I really don't understand why this problem keeps being closed.
@felix920506 commented on GitHub (Oct 17, 2024):
#11551 is a totally different issue from this one
@robotreto commented on GitHub (Oct 17, 2024):
Agreed. I added
export MALLOC_TRIM_THRESHOLD_=131072
to the startup script; memory usage jumped from 500 to 2400 MB in just 4 hours while completely idling, so this hack did not help at all. And I also agree: why is this memory runaway being closed without further investigation?
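One thing worth verifying before drawing conclusions (a hedged suggestion, not something from the thread itself): that the variable actually reaches the running server process, since an export in a script the init system never sources has no effect. For example:

```sh
# Check the live environment of the running server (assumes a single process
# named "jellyfin"; may require root depending on the service user).
tr '\0' '\n' < /proc/"$(pidof -s jellyfin)"/environ | grep MALLOC_TRIM_THRESHOLD_
```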
@soakes commented on GitHub (Oct 17, 2024):
@felix920506 Really? Why are you saying that? Can you enlighten me on your thinking here?
@robotreto mentions that there is possibly a leak, since RAM usage on the Linux version just keeps growing without being released, while this is not happening on Windows. This is exactly the same as the other issue, which was also closed.
@robotreto commented on GitHub (Oct 17, 2024):
Tried to add that env variable to the startup script and restarted the server, but it made no difference.