
Faster-whisper

Description

Faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, a fast inference engine for Transformer models. This container provides a Wyoming protocol server for faster-whisper.

Image

linuxserver/faster-whisper:latest

Categories

  • Uncategorized Services

Ports

  • 10300:10300/tcp

Volumes

| Container | Bind |
| --- | --- |
| /config | /opt/appdata/faster-whisper |

Environment Variables

| Name | Label | Default | Description |
| --- | --- | --- | --- |
| PUID | PUID | 1024 | User ID the container runs as. |
| PGID | PGID | 100 | Group ID the container runs as. |
| TZ | TZ | Europe/Amsterdam | Timezone to use; see this [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). |
| WHISPER_MODEL | WHISPER_MODEL | tiny-int8 | Whisper model used for transcription: one of `tiny`, `base`, `small`, or `medium`, each also available as an `-int8` compressed variant. |
| WHISPER_BEAM | WHISPER_BEAM | 1 | Beam size: number of candidates considered simultaneously during transcription. |
| WHISPER_LANG | WHISPER_LANG | en | Language that will be spoken to the service. |
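
Taken together, the image, port, volume, and environment variables above can be sketched as a docker-compose service. This is a minimal sketch, not part of the template itself: the service name, `restart` policy, and file layout are assumptions.

```yaml
services:
  faster-whisper:
    image: linuxserver/faster-whisper:latest
    ports:
      - "10300:10300/tcp"   # Wyoming protocol port
    volumes:
      # host bind : container path, as listed in the Volumes table
      - /opt/appdata/faster-whisper:/config
    environment:
      - PUID=1024
      - PGID=100
      - TZ=Europe/Amsterdam
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1
      - WHISPER_LANG=en
    restart: unless-stopped   # assumption, not mandated by the template
```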

Labels

| Key | Value |
| --- | --- |
| traefik.enable | true |
| traefik.http.routers.faster-whisper.rule | Host(`faster-whisper.{$TRAEFIK_INGRESS_DOMAIN}`) |
| traefik.http.routers.faster-whisper.entrypoints | https |
| traefik.http.services.faster-whisper.loadbalancer.server.port | 10300 |
| traefik.http.routers.faster-whisper.tls | true |
| traefik.http.routers.faster-whisper.tls.certresolver | default |
| traefik.http.routers.faster-whisper.middlewares | traefik-forward-auth |
| mafl.enable | true |
| mafl.title | Faster-whisper |
| mafl.description | Faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models. |
| mafl.link | https://faster-whisper.{$TRAEFIK_INGRESS_DOMAIN} |
| mafl.icon.wrap | true |
| mafl.icon.color | #007acc |
| mafl.status.enabled | true |
| mafl.status.interval | 60 |
| mafl.group | Services |
| mafl.icon.url | https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/faster-whisper-logo.png |
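
In a compose deployment, these key/value pairs attach to the service as container labels. A hedged excerpt showing the Traefik subset (values copied verbatim from the table; the `{$TRAEFIK_INGRESS_DOMAIN}` placeholder is left for your deployment tooling to substitute, and the mafl keys follow the same `key=value` pattern):

```yaml
services:
  faster-whisper:
    # ... image, ports, volumes, environment as above ...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.faster-whisper.rule=Host(`faster-whisper.{$TRAEFIK_INGRESS_DOMAIN}`)"
      - "traefik.http.routers.faster-whisper.entrypoints=https"
      - "traefik.http.services.faster-whisper.loadbalancer.server.port=10300"
      - "traefik.http.routers.faster-whisper.tls=true"
      - "traefik.http.routers.faster-whisper.tls.certresolver=default"
      - "traefik.http.routers.faster-whisper.middlewares=traefik-forward-auth"
```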

Licensed under the MIT License. Free for all use cases. For enterprise or academic support, please reach out to us.