bash: teddycloud: command not found

Since the automatic readout of the Toniebox unfortunately does not work, I am currently trying the legacy way. I installed ESP and read out the data.

I have used these instructions as a basis: ESP32 | Toniebox Hacking

If I insert

teddycloud --esp32-extract tb.esp32.bin --destination certs/client/esp32

in the terminal, I get the message:

bash: teddycloud: command not found

What could be the reason? I have the Teddycloud server running successfully.
Thanks for the help

The teddycloud command is only available within the server.

Within the server means the computer where I installed teddycloud? Or is there a terminal within teddycloud? Because I run the command on the server where teddycloud is up and running. Thanks @henryk

@waldgeist
You are using the Teddycloud Docker container, right?

Your docker host machine doesn’t know the teddycloud command. You have to connect to the container and run the command inside. Do this:

docker exec -it teddycloud bash

Now you are on a shell inside the container and here you can use your teddycloud command.

That makes total sense now that you say it! Docker and co. are completely new to me, and I'm trying to understand and learn it all right now.

I have one more question: I don't have pip installed in Docker, and the extracted ESP file is on the Raspberry, not in the Docker container. I assume it is easiest / makes the most sense to create all the folders for extracting (see instructions) in the Docker container as well?

Thanks :slight_smile:

You can use Docker volumes to mount directories from the host system (Pi) into your Docker container. These directories are available both on the host and inside the container. Have a look here (first half of the comment):

This way you could actually do some steps inside and some outside the container.

Ok, I'm starting to understand more, but I'm still a bit lost.

I updated the yaml file with the Volumes like here: Teddycloud CC3235 Newbie HowTo - #36 by inonoob

volumes:
  - /home/XXX/teddycloudfolder/certs:/teddycloud/certs
  - /home/XXX/teddycloudfolder/config:/teddycloud/config
  - /home/XXX/teddycloudfolder/data/content:/teddycloud/data/content
  - /home/XXX/teddycloudfolder/data/library:/teddycloud/data/library
  - /home/XXX/teddycloudfolder/data/firmware:/teddycloud/data/firmware
  - /home/XXX/teddycloudfolder/data/cache:/teddycloud/data/cache

But I can’t find a working solution to update the docker container.
The version I built the container with was simply:

volumes:
  - certs:/teddycloud/certs
  - config:/teddycloud/config
  - content:/teddycloud/data/content
  - library:/teddycloud/data/library
  - firmware:/teddycloud/data/firmware

But I can't find the shared folders on the Pi …

(And of course I changed the XXX/teddycloudfolder to the right path)

If you use static folders, you have to make sure that those folders exist before you start the container. So you have to create them by yourself.

Create the folders and then restart the container. Are you using Portainer or do you start the docker-compose.yaml manually?
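As a minimal sketch, assuming the /home/XXX/teddycloudfolder base path from your compose file (parameterised here as $HOME/teddycloudfolder; adjust to the left-hand side of your actual volume entries):

```shell
# Assumed base path on the host; adjust to match the host side
# of your bind-mount entries in docker-compose.yaml.
BASE="$HOME/teddycloudfolder"

# Create every host directory referenced as a bind mount.
mkdir -p "$BASE/certs" "$BASE/config" \
         "$BASE/data/content" "$BASE/data/library" \
         "$BASE/data/firmware" "$BASE/data/cache"

# Then recreate the container so the new mounts take effect
# (run this in the directory containing docker-compose.yaml):
# docker compose up -d --force-recreate
```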

Okay, so I created the folders in the folder where the .yaml file is.
But if I restart the container (docker compose down, … pull, … up -d) and open a shell in the container (docker exec -it teddycloud bash), the file that I added to the folder on my Pi does not show up.

And no: I am not using Portainer. :slight_smile:

There’s a misunderstanding. Let’s have a look at this line here underneath your volumes: definition in docker-compose.yaml (you said your user is also named pi):

/home/pi/teddycloudfolder/certs:/teddycloud/certs

The part before the : is the folder on your host system. The part after the : is the path inside the container. So you mount /home/pi/teddycloudfolder/certs from your host system into the container at /teddycloud/certs.
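Annotated, using the example path from above, the entry reads:

```yaml
volumes:
  # <host path on the Pi>:<path inside the container>
  - /home/pi/teddycloudfolder/certs:/teddycloud/certs
```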

This means that you either have to create the folder /home/pi/teddycloudfolder/certs manually yourself, or Docker will create it when running this docker-compose for the first time.

Any file that you copy into this directory will be available inside the container at /teddycloud/certs. This directory already exists; the Teddycloud devs created it for you. Any files which the container creates in this folder and all subfolders underneath will be accessible from the outside on your host in /home/pi/teddycloudfolder/certs. This works in both directions: if you copy files there, the container will see them.

This probably depends, I haven’t tested it in detail. I don’t run Docker rootless, so at least Portainer creates the directories as needed upon starting the container for the first time. I’m 99% sure that’s also the case when not using Portainer.

@waldgeist Probably the directory already exists. I'm pretty sure you would have encountered an error already if Docker had failed to create the bind mounts.

@chuckf You’re right, I mixed it up with --mount (instead of --volume). The latter creates the bind-mount path on the host automatically; the former throws an error if it doesn’t exist.

But I guess his main problem was understanding that the first part is the actual directory on the host where the files have to be placed so that they are accessible inside the container:

So the folder where the yaml lives is most likely not the mounted directory and therefore not available inside the container.

After another restart of the Pi, Teddycloud is online.
And with absolute paths it works like a charm. Now I was able to share a file between the Pi and Docker (yay).

So I will follow the next steps now and hope to have the flashed box ready soon.

Two general questions for my understanding:

  1. If I change the yaml file, how do I load the changes into the Docker container? Like this?
docker compose down
docker compose pull
docker compose up -d
  2. When I use relative paths (like in the prebuilt file): where is the shared folder on the computer/Pi?

That part I got, just not where the path leads to if it is not absolute.

And thanks for your patience. It is really great to get great support - even if some questions may be very very basic!

  1. just another docker-compose up -d is enough. It will detect changes in your file automatically and apply them

  2. the provided yaml from Teddycloud is using (native) Docker volumes, so you define them with just a single word/name. That’s the preferred option (from Docker’s point of view) compared to bind-mounts, where you define a complete path/directory on your host
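For illustration, a named-volume setup in docker-compose.yaml looks roughly like this (a sketch; the service and volume names are just examples matching this thread):

```yaml
services:
  teddycloud:
    volumes:
      - certs:/teddycloud/certs   # just a name, no host path: a native Docker volume

# Named volumes must also be declared at the top level;
# Docker stores their data under /var/lib/docker/volumes/.
volumes:
  certs:
```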

You can check which volumes a container is using and where they are located on your host by running the docker inspect ... [CONTAINER] command. It prints all attributes of the container, which is a wall of text. We can filter the output to just print the volumes like this:

docker inspect -f '{{range .Mounts}}{{.Type}}: {{.Source}} => {{.Destination}}{{println}}{{ end }}' teddycloud

Output:

volume: /var/lib/docker/volumes/teddycloud_cache/_data => /teddycloud/data/cache
volume: /var/lib/docker/volumes/teddycloud_content/_data => /teddycloud/data/content
volume: /var/lib/docker/volumes/teddycloud_firmware/_data => /teddycloud/data/firmware
volume: /var/lib/docker/volumes/teddycloud_library/_data => /teddycloud/data/library
volume: /var/lib/docker/volumes/teddycloud_certs/_data => /teddycloud/certs
volume: /var/lib/docker/volumes/b34f****3084/_data => /teddycloud/data/www/custom_img
volume: /var/lib/docker/volumes/teddycloud_config/_data => /teddycloud/config

Native Docker volumes are stored in the Docker area of your host, so on Linux it’s /var/lib/docker/volumes/..., followed by [compose-project-name]_[volume-name] (teddycloud_certs, for example).

Obviously you can also access those directories from your host, but be aware that they might have different permissions, as they might belong to other users (docker or root). So you might need sudo to access them on your host, because your user pi might not have access rights.


That was very helpful for my understanding. Thank you Marco