[Issue]: Nightly docker container constantly OOM #5435

Closed
opened 2025-12-22 01:43:12 +01:00 by backuprepo · 11 comments
Owner

Originally created by @Joly0 on GitHub (Feb 5, 2024).

Please describe your bug

I am running the nightly docker image from lsio, which is based on the latest nightly build, and Jellyfin (I think it's the ffprobe process) is constantly OOMing my whole server.

This mostly happens when I scan my media library. There is nothing really uncommon about it, just a normal movie library with about 150 movies. Nothing more.

Jellyfin Version

Other

if other:

10.9.0

Environment

- OS: Unraid
- Linux Kernel: 6.1
- Virtualization: Docker
- Clients: 
- Browser:
- FFmpeg Version: ffmpeg6
- Playback Method: 
- Hardware Acceleration:
- GPU Model: AMD iGPU (Ryzen 9 7950X)
- Plugins: 
- Reverse Proxy: nginx proxy manager
- Base URL:
- Networking:
- Storage: HDD

Jellyfin logs

No response

FFmpeg logs

No response

Please attach any browser or client logs here

No response

Please attach any screenshots here

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

@joshuaboniface commented on GitHub (Feb 5, 2024):

Without significantly more information about your system and Jellyfin instance, it's impossible to say why this might be happening. Please provide as much detail as possible.

Also, do note that unstable builds have been unreliable for quite some time now, with builds regularly failing as we work on our improved CI; we don't recommend running the unstable builds right now.


@Joly0 commented on GitHub (Feb 5, 2024):

> Without significantly more information about your system and Jellyfin instance, it's impossible to say why this might be happening. Please provide as much detail as possible.
>
> Also, do note that unstable builds have been unreliable for quite some time now, with builds regularly failing as we work on our improved CI; we don't recommend running the unstable builds right now.

I have tried to add some more information, but sadly I don't have much more.

Here is part of my syslog; maybe that's helpful:

```
Feb  5 15:40:56 Tower kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=579ea28da58d4bd7e832cf96e63041b690e67c754e9a10c6267bd1a4f14bcaf0,mems_allowed=0,oom_memcg=/docker/579ea28da58d4bd7e832cf96e63041b690e67c754e9a10c6267bd1a4f14bcaf0,task_memcg=/docker/579ea28da58d4bd7e832cf96e63041b690e67c754e9a10c6267bd1a4f14bcaf0,task=ffprobe,pid=18917,uid=99
Feb  5 15:40:56 Tower kernel: Memory cgroup out of memory: Killed process 18917 (ffprobe) total-vm:657628kB, anon-rss:486500kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:1172kB oom_score_adj:0
Feb  5 15:41:17 Tower kernel: asm_exc_page_fault+0x22/0x30
Feb  5 15:41:17 Tower kernel: RDX: 00007ffe69c70820 RSI: 0000000000000002 RDI: 000056547dd77008
Feb  5 15:41:17 Tower kernel: Tasks state (memory values in pages):
Feb  5 15:41:17 Tower kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Feb  5 15:41:17 Tower kernel: [  21666]    99 21666   160616   118220  1187840        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  14646]    99 14646  9368427    73092  1572864        0             0 jellyfin
Feb  5 15:41:17 Tower kernel: [   9710]    99  9710    58228    15853   348160        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [   9726]    99  9726    55545    13188   323584        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [   9795]    99  9795    56736    14416   339968        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [   9803]    99  9803    59366    17002   360448        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10220]    99 10220    58518    16123   356352        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10221]    99 10221    56900    14552   339968        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10227]    99 10227    57247    14855   339968        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10229]    99 10229    56982    14648   331776        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10234]    99 10234    53776    11455   319488        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10313]    99 10313    60075    17700   364544        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10314]    99 10314    58758    16366   348160        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10315]    99 10315    58787    16427   360448        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10316]    99 10316    58503    16093   352256        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10317]    99 10317    58989    16622   348160        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10320]    99 10320    52928    10517   307200        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10324]    99 10324    58572    16225   348160        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10325]    99 10325    57140    14706   335872        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10327]    99 10327    58359    15996   356352        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10328]    99 10328    55518    13149   327680        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10332]    99 10332    56242    13873   335872        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10342]    99 10342    57314    14996   339968        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10344]    99 10344    52501    10050   303104        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10345]    99 10345    57676    15280   344064        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10346]    99 10346    56370    13946   331776        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10347]    99 10347    53362    10969   315392        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10348]    99 10348    63169    20776   389120        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10349]    99 10349    55706    13345   335872        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10351]    99 10351    57070    14682   352256        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10353]    99 10353    54642    12174   319488        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10354]    99 10354    56929    14524   335872        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10370]    99 10370    53730    11403   311296        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10371]    99 10371    59400    17037   360448        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10372]    99 10372    60281    17830   368640        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10373]    99 10373    59303    16943   356352        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10374]    99 10374    53994    11606   311296        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10375]    99 10375    59316    16971   360448        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10376]    99 10376    58963    16570   352256        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10377]    99 10377    58002    15613   356352        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10378]    99 10378    58794    16405   360448        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10379]    99 10379    57397    15007   335872        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10380]    99 10380    58264    15880   352256        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10381]    99 10381    56447    14046   331776        0             0 ffprobe
Feb  5 15:41:17 Tower kernel: [  10382]    99 10382    58865    16474   348160        0             0 ffprobe
```

Otherwise yes, I have noticed that nightly builds are quite unstable, but I didn't make a backup before going from stable to nightly, and now I can't return without setting everything up from scratch, which I can't do.
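As an aside, kill lines like the ones in the log above can be tallied per victim process to see what the OOM killer is targeting. A small sketch; `oom_victims` is a made-up helper name, and the syslog path in the usage comment is an assumption:

```shell
# Sketch: count memory-cgroup OOM kills per victim process name in a
# syslog-style file. Reads lines like:
#   "... Memory cgroup out of memory: Killed process 18917 (ffprobe) ..."
oom_victims() {
  grep "Memory cgroup out of memory" "$1" 2>/dev/null \
    | sed -n 's/.*Killed process [0-9]* (\([^)]*\)).*/\1/p' \
    | sort | uniq -c | sort -rn
}
# Usage: oom_victims /var/log/syslog    # prints e.g. "     12 ffprobe"
```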


@Joly0 commented on GitHub (Feb 5, 2024):

To add some more information about my system: I have a Ryzen 9 7950X with 64 GB of RAM, running Unraid/Slackware as the host OS. Jellyfin runs inside a Docker container provided by linuxserver.io. This container uses the same executables as the official unstable builds, so same ffmpeg, same jellyfin-web, etc.


@Joly0 commented on GitHub (Feb 5, 2024):

I have tried to limit the RAM usage of the Jellyfin container to 8 GB, but that didn't work because of cgroup limitations on the Unraid host, so Jellyfin is eating it all up. CPU usage is also spiking very high.

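For readers hitting the same wall: when the host's cgroup support cooperates, a memory cap can be set at the Docker level, so runaway ffprobe processes are killed inside the container's cgroup instead of exhausting the host. A minimal docker-compose sketch; the image tag and the 8g figure are illustrative, and this still requires working cgroup memory accounting on the host:

```yaml
# Sketch: hard-cap the container's memory (values are examples).
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:nightly
    mem_limit: 8g       # hard memory ceiling for the container's cgroup
    memswap_limit: 8g   # equal to mem_limit => no extra swap allowance
```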

@Joly0 commented on GitHub (Feb 8, 2024):

> Also, do note that unstable builds have been unreliable for quite some time now, with builds regularly failing as we work on our improved CI; we don't recommend running the unstable builds right now.

Btw, is it possible to downgrade from the 10.9.0 unstable version to the latest 10.8.x stable version? I tried it, had some issues, and couldn't proceed. I have read that there is no option to downgrade and that I would have to start from scratch. Is it at least possible to retain the users? I don't want to recreate all my users with their settings, etc.
I don't mind redoing all the other Jellyfin settings, but the users are annoying, as I would also have to give everyone a new password.

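A hedged aside for anyone in the same spot: before experimenting with up/downgrades, it is worth snapshotting the server databases; to my understanding, user accounts live in `jellyfin.db` under the data directory. `backup_jf_dbs` is a made-up helper, and the default path mentioned in the usage comment (`/config/data` in LSIO images) is an assumption:

```shell
# Hedged sketch: copy the databases that hold users and library state
# before a version change. Arguments: $1 = data dir, $2 = backup dir.
backup_jf_dbs() {
  mkdir -p "$2"
  # jellyfin.db: user accounts; library.db: library metadata
  cp "$1"/jellyfin.db "$1"/library.db "$2"/ 2>/dev/null || true
}
# Usage: backup_jf_dbs /config/data "/config/db-backup-$(date +%F)"
```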

@felix920506 commented on GitHub (Feb 8, 2024):

@Joly0 No, there is no way to downgrade from unstable to 10.8.x


@Joly0 commented on GitHub (Feb 8, 2024):

Ok, I was able to downgrade and keep my users :)


@solidsnake1298 commented on GitHub (Feb 10, 2024):

The LSIO nightly definitely behaves differently than the official unstable image. The number of ffmpeg processes is supposed to be limited to 2x the CPU threads, IIRC. The official unstable image behaves correctly, 16 ffmpeg processes on my 8 thread CPU, and overall system memory usage peaked around 4.4GB. The LSIO nightly spawned over 100 ffmpeg processes at one point, and memory usage continuously climbed to over 12GB before I stopped the scan.

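The "2x the CPU threads" ceiling described above is easy to check while a scan runs. A quick sketch, assuming standard `nproc` and `pgrep` are available in the container or on the host:

```shell
# Sketch: compare the live ffprobe count against the expected ceiling
# of 2x CPU threads (a count far above the ceiling suggests the bug).
limit=$(( 2 * $(nproc) ))
count=$(pgrep -c ffprobe || true)   # pgrep -c prints 0 if none match
echo "ffprobe: ${count:-0} running, expected ceiling: ${limit}"
```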

@Joly0 commented on GitHub (Feb 13, 2024):

Hey @solidsnake1298, I have asked on the linuxserver.io Discord server, and this is what they said:
![grafik](https://github.com/jellyfin/jellyfin/assets/13993216/eb300655-0221-4168-84a0-7399fb0a043e)

Can you say something about that?


@solidsnake1298 commented on GitHub (Feb 15, 2024):

> Hey @solidsnake1298 I have asked on the linuxserver.io Discord server, and this is what they said: grafik
>
> Can you say something about that?

Double-checked that the LSIO image correctly detected the CPU core/thread count (it did). So I'm not sure what else to say beyond that SOMETHING is broken, because in addition to the ffmpeg OOM issue, the Books library type in the LSIO image is straight up broken: no metadata providers, and it doesn't track playback progress like the official image does.


@aptalca commented on GitHub (Feb 16, 2024):

Linuxserver dev here...

We pretty much install jellyfin debs from the jellyfin apt repo in a fairly vanilla ubuntu jammy image, and start the server as an unprivileged user with the data folders set to /config. We do no modifications of upstream code. We don't touch anything related to jellyfin spawning ffmpeg/ffprobe processes, and we don't touch anything related to jellyfin's library handling.

Our image is as close as you can get to running official jellyfin binaries on an Ubuntu Jammy machine or VM.

I understand upstream devs often refrain from troubleshooting issues in downstream projects (which is often fair).

@Joly0 I recommend that you install Jellyfin unstable from the Jellyfin apt repo in an Ubuntu Jammy VM (instructions in our [Dockerfile](https://github.com/linuxserver/docker-jellyfin/blob/nightly/Dockerfile#L17-L29)) and reproduce the issues there, so you can submit the bug reports to Jellyfin without any relation to the linuxserver Docker image.

Reference: starred/jellyfin#5435