Could somebody please help me get started with Docker (specifically, docker-compose)?

Hello DLN Forum!
I recently got an old hand-me-down desktop computer with 8 GB of RAM and an old i5 processor. I love computers and Linux, so I am attempting to turn this machine into a home server (more for learning about Linux and systems administration than for day-to-day use at this point). I had a few old 500 GB drives lying around and a 120 GB SSD, so I am using the SSD for Ubuntu Server and the 500 GB drives in a ZFS RAIDZ1 array.

I am attempting to run Nextcloud and Jellyfin in containers and make them accessible from anywhere via subdomains, with certificates from Let’s Encrypt. Ideally, the Jellyfin container would be able to access the media files stored by Nextcloud. So far, I have set up a domain with DuckDNS, configured my router to work with it, and forwarded ports 443 and 80 to my server, which I have assigned an IP address on my home network. I have just finished installing Ubuntu Server and am going to install Docker and docker-compose on it tomorrow; other than that, it should be ready to go.
Unfortunately, I have been unable to figure out how to construct a docker-compose.yml file that would allow me to do this (almost everything I have learned so far has been from Google…). I found this example docker-compose file provided by Nextcloud on GitHub (docker/docker-compose.yml at master · nextcloud/docker · GitHub), but I am at a loss as to how to incorporate Jellyfin into it in a way that allows Jellyfin to access the Nextcloud media files and have an HTTPS connection. Also, how do I incorporate the fix in the folder that contains this sample docker-compose.yml into my final docker-compose.yml? (I would post a link to it, but I am limited to two links because I am new to the forum here.)

I know that I am asking a lot! Any guides or pointers at all would be much appreciated! Also, I wasn’t sure what category to put this help request under, so if it belongs somewhere else, please let me know. Thank you so much!

  • Distribution & Version: Ubuntu Server 20.04
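To make the goal concrete, here is a hypothetical sketch of the Jellyfin/Nextcloud file sharing I have in mind — the image names and paths are just my assumptions (the official Nextcloud image stores uploaded files under /var/www/html/data):

```yaml
services:
  app:
    image: nextcloud:apache
    volumes:
      - nextcloud:/var/www/html          # Nextcloud keeps uploaded files under data/ here

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      # hypothetical: mount the same named volume read-only so Jellyfin can index the media
      - nextcloud:/media/nextcloud:ro

volumes:
  nextcloud:
```

I have no idea yet if mounting the whole volume read-only like this is the right approach, but it is the sort of thing I am trying to build toward.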

This is an excellent source for learning all things Docker:

Here is a good YouTube series on Docker. I think one of the episodes covers Docker Compose:

Hope this helps.

2 Likes

I will check these resources out - thanks!

1 Like

You’re very welcome.

I use docker-compose for my containers. Let me know if you get stuck. If you’d like, give me an image you’re interested in and we’ll walk through it together. If you’d prefer, you can DM me.

1 Like

I’d also recommend DLN’s Sudo Show; that’s right up their alley, and they have a Matrix channel here if you want extra help.

It’s good to know the overall objective, but dividing it into bite-sized chunks is going to be critical to making life easy for you and anyone helping. Try to narrow it down to one problem (and if that’s not the right problem, someone may let you know), get that solved, then move to the next.

Would love seeing this progress.

1 Like

Thank you, everybody, for the fast (and very helpful) responses! I have bought the Docker Deep Dive book and plan on working through it over the summer. If I run into anything I don’t understand (either in the book or when I am building my docker-compose.yml file for the home server), I will post the problem here, and I will let you know when I have finished the server. I have been listening to the Destination Linux podcast, but I will be sure to check out the Sudo Show as well. Have a great weekend, everybody!

There’s a scheduled Sudo Hangout on 5/26. I can’t make it due to timing (I’ll still be at work), but it is a great way to meet the hosts and talk tech with others.

No one asked me, but I would like to recommend limiting your docker-compose exposure and going down the single-host Kubernetes route using k3s. It goes much farther than just the Docker ecosystem.

2 Likes

Or MiniKube…

Minikube was mainly designed to be used as a testing ground; I would not recommend it for home prod :frowning:

LinuxServer.io has great documentation as well. They also have a very active Discord (just make sure you are in the correct channel when asking questions). I learned a lot from them.
You will want to run a reverse proxy, assuming you want to access these containers off your LAN (the reason behind your DuckDNS setup). The LSIO SWAG container helps you do this.

My slimmed down compose file looks like this:

---
version: "3.1"
services:

###############
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    env_file:
      - /home/rastacalavera/.env
    environment:
      - PUID=1001
      - PGID=1001
      - "TZ=${TZ}"
      - "URL=${swag_url}"
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - CERTPROVIDER=cloudflare #optional
      - DNSPLUGIN=cloudflare #optional
      # - DUCKDNSTOKEN= #optional
      - "EMAIL=${swag_email}" #optional
      - ONLY_SUBDOMAINS=true #optional
      # - EXTRA_DOMAINS= #optional
      - STAGING=false #optional
    volumes:
      - /opt/appdata/swag/config:/config
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped

##############################################
  cfddns:
    container_name: cfddns
    image: hotio/cloudflareddns
    env_file:
      - /home/rastacalavera/.env
    environment:
      - PUID=1001
      - PGID=1001
      - UMASK=002
      - "TZ=${TZ}"
      - INTERVAL=300
      - DETECTION_MODE=dig-whoami.cloudflare
      - LOG_LEVEL=3
      - "CF_USER=${swag_email}"
      - "CF_APIKEY=${cfddns_api}"
      # - CF_APITOKEN
      # - CF_APITOKEN_ZONE
      - "CF_HOSTS=${swag_url}"
      - "CF_ZONES=${swag_url}"
      - CF_RECORDTYPES=A;A:AAAA
    volumes:
      - /opt/appdata/cfdns:/config
    restart: unless-stopped

########################################
  jellyfin:
    image: ghcr.io/linuxserver/jellyfin:latest #arm64v8-latest
    container_name: stream
    environment:
      - PUID=1001
      - PGID=1001
      - "TZ=${TZ}"
      - JELLYFIN_PublishedServerUrl=192.168.0.140 #optional
    volumes:
      - /opt/appdata/jellyfin/library:/config
      - /home/rastacalavera/media/tv:/data/tvshows
      - /home/rastacalavera/media/movies:/data/movies
      - /home/rastacalavera/media/audiobooks:/data/audiobooks
      - /opt/vc/lib:/opt/vc/lib #optional
    ports:
      - 8096:8096
      - 8920:8920 #optional
      - 7359:7359/udp #optional
      - 1900:1900/udp #optional
    devices:
      # - /dev/dri:/dev/dri #optional
      # - /dev/vcsm:/dev/vcsm #optional
      - /dev/vchiq:/dev/vchiq #optional
      - /dev/video10:/dev/video10 #optional
      - /dev/video11:/dev/video11 #optional
      - /dev/video12:/dev/video12 #optional
    restart: unless-stopped

I use Cloudflare for DDNS and a container that updates it at a set interval. You should learn about environment files so that you don’t have to put sensitive information into your compose file.
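For example, the env file referenced above might look something like this (the values are made-up placeholders — substitute your own):

```yaml
# /home/rastacalavera/.env (example values only):
#   TZ=America/New_York
#   swag_url=example.duckdns.org
#   swag_email=me@example.com
#   cfddns_api=0123456789abcdef
services:
  swag:
    env_file:
      - /home/rastacalavera/.env   # injects these variables into the container at runtime
    environment:
      - "TZ=${TZ}"                 # ${TZ} is substituted by docker-compose when it parses the file
```

One subtlety: docker-compose substitutes ${VAR} references from your shell environment or from a .env file sitting next to docker-compose.yml, while env_file entries are only injected into the container at runtime — keeping the .env in the project directory lets one file serve both purposes.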

I found these two links helpful in learning Docker:

https://labs.play-with-docker.com/

1 Like

Ok. I have been playing around with the Nextcloud template that I linked to in my first post, and this is what I have so far:

version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=changeme # placeholder: use a real secret, or move this into db.env
    env_file:
      - db.env

  redis:
    image: redis:alpine
    restart: always

  app:
    image: nextcloud:apache
    restart: always
    volumes:
      - nextcloud:/var/www/html
    environment:
      - VIRTUAL_HOST=nextcloud.mysecretdomain.duckdns.org
#      - LETSENCRYPT_HOST=
#      - LETSENCRYPT_EMAIL=
      - MYSQL_HOST=db
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis
    networks:
      - proxy-tier
      - default

  cron:
    image: nextcloud:apache
    restart: always
    volumes:
      - nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.paulczar.omgwtfssl.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

#  letsencrypt-companion:
#    image: jrcs/letsencrypt-nginx-proxy-companion
#    restart: always
#    volumes:
#      - certs:/etc/nginx/certs
#      - vhost.d:/etc/nginx/vhost.d
#      - html:/usr/share/nginx/html
#      - /var/run/docker.sock:/var/run/docker.sock:ro
#    networks:
#      - proxy-tier
#    depends_on:
#      - proxy

# self signed
  omgwtfssl:
    image: paulczar/omgwtfssl
    restart: "no"
    volumes:
      - certs:/certs
    environment:
      - SSL_SUBJECT=mysecretdomain.duckdns.org
      - CA_SUBJECT=mysecretemail@secret.com
      - SSL_KEY=/certs/servhostname.local.key
      - SSL_CSR=/certs/servhostname.local.csr
      - SSL_CERT=/certs/servhostname.local.crt
    networks:
      - proxy-tier

volumes:
  db:
  nextcloud:
  certs:
  vhost.d:
  html:
networks:
  proxy-tier:

When I run docker-compose up -d, I can access Nextcloud from my DuckDNS domain over HTTP, but I receive a “this site refused to connect” error when I attempt to access it via HTTPS. I am planning on using self-signed certs instead of certs provided by Let’s Encrypt, at least until I figure out how to make HTTPS work. My nginx.conf file in the container looks like this:

# nextcloud.mysecretdomain.duckdns.org
upstream nextcloud.mysecretdomain.duckdns.org-upstream {
	# Cannot connect to network 'timothyc_default' of this container
	## Can be connected with "timothyc_proxy-tier" network
	# timothyc_app_1
	server 172.19.0.4:80;
}
server {
	server_name nextcloud.mysecretdomain.duckdns.org;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	location / {
		proxy_pass http://nextcloud.mysecretdomain.duckdns.org-upstream;
	}
}

Shouldn’t the docker-compose file be telling Nginx to listen on port 443 and to use the certs provided by the omgwtfssl container? I did try to point Nginx at the omgwtfssl container by changing the “labels” section from pointing at a Letsencrypt container to the omgwtfssl container, but that seemed to make no difference. Commenting out both of the “labels” lines didn’t either. Any guidance would be great!
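From what I can tell reading the nginx-proxy documentation, it only generates a 443 server block for a host when it finds certificate files in /etc/nginx/certs named after that container’s VIRTUAL_HOST — so my current guess is that the omgwtfssl service needs to write its output under that name instead of servhostname.local. Here is a sketch of what I plan to try next (untested):

```yaml
  omgwtfssl:
    image: paulczar/omgwtfssl
    restart: "no"
    volumes:
      - certs:/certs
    environment:
      # nginx-proxy looks for <VIRTUAL_HOST>.crt and <VIRTUAL_HOST>.key in
      # /etc/nginx/certs, so name the output after the app's VIRTUAL_HOST
      - SSL_SUBJECT=nextcloud.mysecretdomain.duckdns.org
      - SSL_KEY=/certs/nextcloud.mysecretdomain.duckdns.org.key
      - SSL_CSR=/certs/nextcloud.mysecretdomain.duckdns.org.csr
      - SSL_CERT=/certs/nextcloud.mysecretdomain.duckdns.org.crt
    networks:
      - proxy-tier
```

If that is right, the proxy should regenerate its config with a listen 443 block once those files exist; please correct me if I have misread how the cert lookup works.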