High CPU usage due to MediaSegments and TrickplayInfos SELECT DbCommands #6773
Originally created by @Skaytacium on GitHub (Feb 26, 2025).
Description of the bug
I recently noticed extremely high CPU usage by the jellyfin process during playback (paused or resumed) of any content. It looks like the first screenshot when playback is paused, and like the second during playback, where it completely uses up ~200% of CPU (2 cores).
This happens regardless of the client (tried with jellyfin-media-player, jellyfin-mpv-shim and the Jellyfin Android app), and it occurs on direct streaming: the screenshots above show the CPU usage of the jellyfin process itself, not of any ffmpeg processes.
Initially I figured it might be an issue with some of the plugins (namely Playback Reporting and Kodi Sync Queue), so I uninstalled both and checked, but the issue still persisted. I have already tried debugging this in the troubleshooting channel on the Jellyfin server, but no solution was found.
What seemed odd to me was the number of Executed DbCommand log entries, which matched the exact timing of the CPU usage spikes while paused and were emitted constantly during resumed playback. They were the following, repeated over and over (the full excerpt is in the server debug logs under additional information). These clearly run every few milliseconds and, as I mentioned, match exactly with the CPU usage spikes. What's odd to me is that when I ran the following tasks, they took very little time to execute:
My Trickplay settings are set to the defaults (shown below under additional information).
Reproduction steps
What is the current bug behavior?
Extremely high CPU usage for no apparent reason (there is no transcoding happening).
What is the expected correct behavior?
Normal CPU usage (~10-20%).
Jellyfin Server version
10.10.0+
Specify commit id
No response
Specify unstable release number
No response
Specify version number
No response
Specify the build version
10.10.6
Environment
Jellyfin logs
FFmpeg logs
Client / Browser logs
No response
Relevant screenshots or videos
No response
Additional information
Debug logs for the server
@gnattu commented on GitHub (Feb 26, 2025):
Are you using things like JellyStats, which periodically perform expensive queries against the server?
@Skaytacium commented on GitHub (Feb 26, 2025):
Nope, the only external connections to the server are:
@Skaytacium commented on GitHub (Jun 7, 2025):
So, after roughly 8 hours of debugging on the new version (10.10.7), it turns out that the issue is with jellyfin-mpv-shim. There's already an issue open there (jellyfin/jellyfin-mpv-shim#265), in which I'll probably do some more investigation and open a PR.
For future reference and further clarification: after playing around with the source code, I found that a bunch of GET requests are being made to the /Users/{UserId}/Items/{ItemId} endpoint. This maps to the Jellyfin.Api.Controllers.UserLibraryController.GetItemLegacy endpoint and causes the GetTrickplayManifests function to run, which then queries the database. This doesn't happen with the web client and jellyfin-media-player anymore, so maybe something changed in the web UI? No clue.
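For illustration, the request pattern looks roughly like the sketch below. Only the /Users/{UserId}/Items/{ItemId} path and the controller/function names above come from what I observed; the base URL, the token header value, the IDs and the 10-second interval are made-up placeholders, not taken from jellyfin-mpv-shim's code.

```csharp
// Illustrative polling loop: only the endpoint path comes from the observation above.
// Base URL, token, IDs and the 10-second interval are placeholders for the sketch.
using System;
using System.Net.Http;
using System.Threading.Tasks;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8096") };
http.DefaultRequestHeaders.Add("X-Emby-Token", "<api key>");

var userId = "<user id>";
var itemId = "<item id>";

while (true)
{
    // Server side, this request is routed to UserLibraryController.GetItemLegacy,
    // which calls GetTrickplayManifests and hits the database, so a polling loop
    // like this produces a steady stream of SELECTs while a session is active.
    await http.GetAsync($"/Users/{userId}/Items/{itemId}");
    await Task.Delay(TimeSpan.FromSeconds(10));
}
```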
@Skaytacium commented on GitHub (Jun 8, 2025):
This is embarrassing. I've found out what the issue was, and it's got nothing to do with jellyfin-mpv-shim. I'll give an explanation here and submit a PR soon; I'll be using the latest release.
The function that causes the CPU (and disk) load is GetBaseItemDtos: c2cc27a8a9/Emby.Server.Implementations/Dto/DtoService.cs (L84)
This basically runs an SQL query for every item mentioned in its parameters. Now, this function is called by a lot of things, obviously. In this context, the culprit is something that is repeated very often and has a very large list of parameters. After some more thorough debugging, it turns out that this function is called on every progress update.
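As a simplified model of that pattern (this is not the actual DtoService code; the types and the lookup delegate are made up for illustration), the cost grows linearly with the number of items passed in, because each item triggers its own database lookup:

```csharp
// Simplified model of the pattern described above, not Jellyfin's actual DtoService.
// The point is only that every item in the list costs its own database round trip.
using System;
using System.Collections.Generic;

public record ItemDto(Guid Id, string Name);

public class DtoServiceSketch
{
    private readonly Func<Guid, ItemDto> _queryItemFromDb; // stand-in for an EF Core query

    public DtoServiceSketch(Func<Guid, ItemDto> queryItemFromDb)
        => _queryItemFromDb = queryItemFromDb;

    public IReadOnlyList<ItemDto> GetBaseItemDtos(IReadOnlyList<Guid> itemIds)
    {
        var dtos = new List<ItemDto>(itemIds.Count);
        foreach (var id in itemIds)
        {
            // One lookup per item: with a long parameter list this work is repeated
            // in full on every call, which is what makes calling it per progress
            // update so expensive.
            dtos.Add(_queryItemFromDb(id));
        }

        return dtos;
    }
}
```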
The OnPlaybackProgress function calls UpdateNowPlayingItem: c2cc27a8a9/Emby.Server.Implementations/Session/SessionManager.cs (L870-L886)
This update function can be extremely expensive, depending on whether there is a queue.
c2cc27a8a9/Emby.Server.Implementations/Session/SessionManager.cs (L385)
c2cc27a8a9/Emby.Server.Implementations/Session/SessionManager.cs (L457-L468)
The observations in my previous comment correlate entirely with the way this works: smaller shows don't cause this issue, movies don't cause this issue, and when one is towards the end of a show (so the remaining queue is small), the issue does not occur. Of course, even with a queue of just 3 episodes, the server load on a weaker machine (such as my Raspberry Pi 5) is significant enough not only to be noticeable, but to cause a slight slowdown.
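To put rough numbers on it (these are assumed figures for scale, not measurements from my setup): with, say, 24 episodes queued and a client posting a progress report roughly every 10 seconds, each report rebuilds DTOs for the whole queue, so about 24 per-item lookups per report, or around 144 per minute, plus whatever MediaSegments and TrickplayInfos SELECTs those DTOs pull in, sustained for as long as the session is alive, even while paused.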
jellyfin-mpv-shim is a client that queues the remaining episodes of a show; jellyfin-media-player does not do this, but it used to. This also explains why the CPU load occurs even when the show is paused. Direct streaming obviously doesn't cause this issue, since there are no progress updates being posted. This is consistent with all of the observations and has been proven by the following logs (from modified source code):
I'll submit a PR soon enough; I've been testing this out on the v10.10.7 tag, but the master branch still shows the same behavior.
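Purely as an illustration of the direction a fix could take (this is not necessarily what my PR will do, and all of the names below are made up, not Jellyfin's SessionManager): since a progress tick doesn't change the queue contents, the full-queue DTO list could be cached per session and only rebuilt when the queue itself changes.

```csharp
// Illustrative only; not the pending PR and not Jellyfin's actual SessionManager.
// Idea: pay the expensive full-queue DTO build only when the queue changes,
// and let ordinary progress reports reuse the cached result.
using System;
using System.Collections.Generic;

public record ItemDto(Guid Id, string Name);

public class NowPlayingQueueCacheSketch
{
    private IReadOnlyList<ItemDto>? _cachedQueueDtos;
    private int _cachedQueueVersion = -1;

    public IReadOnlyList<ItemDto> GetQueueDtos(
        IReadOnlyList<Guid> queueItemIds,
        int queueVersion, // bumped whenever the client actually changes the queue
        Func<IReadOnlyList<Guid>, IReadOnlyList<ItemDto>> buildDtos) // the expensive per-item build
    {
        if (_cachedQueueDtos is null || queueVersion != _cachedQueueVersion)
        {
            _cachedQueueDtos = buildDtos(queueItemIds); // pay the per-item query cost once
            _cachedQueueVersion = queueVersion;
        }

        // Progress updates between queue changes reuse the cached list.
        return _cachedQueueDtos;
    }
}
```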