I built a Docker image which contains all the necessary tools and dependencies for opus2tonie.py. In addition, I put a little shell script wrapper around it to offer even more possibilities (like batch encoding lots of episodes in a row):
Usage
The intention is to run this container on demand and only for as long as the file conversions are running. The recommended way is to mount your current host directory $(pwd) inside the container (at /data), let it do the conversion, and then let the container shut down again.
I added lots of usage examples on GitHub, the easiest one being:
Convert a single file audiobook.mp3 from your current directory
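A sketch of what that invocation could look like, following the transcode command pattern used elsewhere in this thread (whether a separate output flag exists is not shown here, so only the source parameter is used):

```shell
# Sketch only: image name and "transcode" subcommand follow the usage
# shown in this thread. This builds and prints the command as a dry run;
# run the echoed line to actually perform the conversion.
IMAGE="ghcr.io/marco79cgn/audio2tonie"
CMD="docker run --rm -v \$(pwd):/data $IMAGE transcode -s audiobook.mp3"
echo "$CMD"
```

The bind mount makes audiobook.mp3 from your current directory visible inside the container at /data, where the wrapper script picks it up.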
To keep the container as small as possible, the base image is bookworm-slim with integrated Python. I'm also using ffmpeg static builds (which cut the size to a quarter).
I haven't deployed this container to a Docker registry yet, so please build it on your own for now (it's fast!). I'd be happy about any feedback!
This is amazing! Thank you very much, I've already used it multiple times.
I have an additional script which helps to create the required structure for my audiobooks by placing all episodes into their respective folders.
#!/bin/bash

# Organize audiobook files into folders
organize_audiobooks() {
    local folder_path="$1"
    cd "$folder_path" || { echo "Folder not found!"; exit 1; }

    # Analyze and organize the files
    for file in *.ogg; do
        # Split the filename based on the pattern
        # Example: "Die Biene Maja - Majas Geburt - Teil 01.ogg"
        base_name=$(basename "$file")
        audiobook=$(echo "$base_name" | cut -d'-' -f1 | xargs)      # audiobook name
        episode_title=$(echo "$base_name" | cut -d'-' -f2 | xargs)  # episode title
        part=$(echo "$base_name" | grep -oP 'Teil \d+' | xargs)     # part number (currently unused)

        # Create the main folder based on the audiobook name
        main_folder="$audiobook"
        mkdir -p "$main_folder"

        # Create the subfolder based on the episode title
        episode_folder="$main_folder/$episode_title"
        mkdir -p "$episode_folder"

        # Move the file into the matching folder
        mv "$file" "$episode_folder/"
    done

    # Check how many files ended up in each folder
    # (relative glob, since we already changed into $folder_path above)
    echo "Checking the files in the folders:"
    for episode_folder in */*; do
        file_count=$(find "$episode_folder" -type f | wc -l)
        if [ "$file_count" -eq 0 ]; then
            echo "Warning: no files in folder '$episode_folder'."
        else
            echo "Folder '$episode_folder' contains $file_count file(s)."
        fi
    done

    echo "Audiobook files were organized successfully!"
}

# Ask the user for the folder
read -p "Enter the path to the folder containing the files: " source_folder
organize_audiobooks "$source_folder"
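To illustrate the parsing step above: the cut/xargs pipeline splits a filename like the one in the script's example comment into audiobook name and episode title, with xargs trimming the surrounding spaces:

```shell
# Demonstrates the filename parsing used in organize_audiobooks
base_name="Die Biene Maja - Majas Geburt - Teil 01.ogg"
audiobook=$(echo "$base_name" | cut -d'-' -f1 | xargs)      # audiobook name
episode_title=$(echo "$base_name" | cut -d'-' -f2 | xargs)  # episode title
echo "$audiobook/$episode_title"
```

Note that this splits on every hyphen, so an episode title that itself contains a "-" would be cut short.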
I was wondering where you got the information on how to use the teddyCloud API? My intention is to modify the upload path so that the generated .taf files land directly in the destination folder, which reduces the effort.
I used the latter and checked the network tab while uploading a file with the GUI. Then I tried it with curl and used the -F "file=@output_file" option instead (which is a little easier).
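For reference, a sketch of such an upload call. The hostname and the exact endpoint path are assumptions derived from watching the GUI's network tab, so double-check them against your own instance:

```shell
# Assumed host and endpoint; adjust both to your teddyCloud instance.
TC_HOST="teddycloud.local"
UPLOAD_URL="http://${TC_HOST}/api/fileUpload?path=/&special=library"
# Dry run: prints the command; remove the leading echo to actually upload.
echo curl -F "file=@audiobook.taf" "$UPLOAD_URL"
```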
You can create directories in your teddyCloud library like this:
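A minimal sketch of such a call, assuming a dirCreate endpoint that takes the new folder name in the POST body and the parent path plus special=library as query parameters; verify the exact API shape against your teddyCloud version:

```shell
# Assumed host, endpoint and parameters; verify against your teddyCloud version.
TC_HOST="teddycloud.local"
CREATE_URL="http://${TC_HOST}/api/dirCreate?path=/&special=library"
# Dry run: prints the command; remove the leading echo to actually create the folder.
echo curl -X POST --data "Die Biene Maja" "$CREATE_URL"
```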
You can also download files from your library via the API. This is possible as taf or even as raw opus file (without the taf header). I will add a tap input parameter to this script which automatically downloads all needed files, splits existing tafs into their chapters, and repackages them into a combined taf file. This preserves the chapters inside a single episode (which is not possible with the native tap feature).
I just updated the Docker container with support for ARD Audiothek content. It uses their REST API to retrieve the content (no HTML scraping). Just copy and paste the link of the content and use it as the -s (source) parameter.
Usage
Command:
docker run --rm -v "$(pwd)":/data ghcr.io/marco79cgn/audio2tonie transcode -s "[AUDIOTHEK-URL]"
The output filename is generated automatically and contains only valid characters (no unsupported ones), e.g. Hape.Birthday.ein.3nach9.-Spezial.zum.60.Geburtstag.von.Hape.Kerkeling.taf.
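The sanitizing step can be sketched roughly like this. This is not the container's actual implementation, just the general idea: replace every character outside [:alnum:] with a dot and collapse runs (the real script keeps slightly different characters, and umlauts would also be replaced here in the C locale):

```shell
title="Hape Birthday: ein 3nach9-Spezial zum 60. Geburtstag von Hape Kerkeling"
# Replace everything that is not alphanumeric with a dot, collapse repeated
# dots, and strip any leading/trailing dot before appending the extension.
safe=$(printf '%s' "$title" | tr -c '[:alnum:]' '.' \
  | sed -e 's/\.\{2,\}/./g' -e 's/^\.//' -e 's/\.$//')
echo "${safe}.taf"
```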
Yes, that's totally possible. I already integrated this in my dedicated script here:
It works for mini series or podcasts with up to 12 episodes. This is because I was scraping the HTML in that former script, and the main page only includes the latest 12 items.
In the meantime I figured out how their API works, both for single items and whole program sets (podcasts/mini series). I will implement it as soon as I find some spare time. The only problem is that there are podcasts with over 100 episodes, while the limit for chapters in a taf is 99.
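One way around that limit would be to split a long podcast across several taf files of at most 99 chapters each; a simple ceiling-division sketch (the variable names are mine, not from the script):

```shell
# Split N episodes into chunks of at most 99 chapters per taf file.
total_episodes=250
max_chapters=99
# Ceiling division: (a + b - 1) / b rounds up in integer arithmetic.
num_files=$(( (total_episodes + max_chapters - 1) / max_chapters ))
echo "$num_files taf files needed"
```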
And as a crazy idea, what about providing it as a webstream?
Would it be possible to assign a "podcast.sh" as the source in a tonie-json file…
…and this would just return one episode after another, just like any other radio-station (mpd-like) stream that ffmpeg in TC could handle?
Could be the case. Maybe you can change this somewhere in the settings of Docker Desktop? I'm using the official "Docker for Mac" (which is the equivalent) and it works there. I haven't changed any settings.