Compare commits

...

71 Commits

Author SHA1 Message Date
Quentin Duchemin 52ec3a3529 Ugrade Lychee to v5 2024-10-26 13:52:02 +02:00
Quentin Duchemin e2654bd354 [FW] OVH mail config 2024-10-06 18:42:23 +02:00
Quentin Duchemin 5d55e37bee Update ARL 2024-07-29 15:31:30 +02:00
Quentin Duchemin 9594e79f36 Add CDN for serving files (SCW edge services) 2024-07-22 19:26:55 +02:00
Quentin Duchemin 22e1f5ab91 Switch to OVH DNS ACME challenge 2024-05-17 17:52:05 +02:00
Quentin Duchemin 22da9b0d67 Update ARL 2024-05-11 17:35:25 +02:00
Quentin Duchemin da0b30f001 Pre-fix for Peertube and updates for FW 2024-05-03 19:44:47 +02:00
Quentin Duchemin 502a07e1e8 Correctly create empty set 2024-03-22 19:36:26 +01:00
Quentin Duchemin 92575f2af0 [FW] Add filter for existant Deezer albums 2024-03-18 23:27:22 +01:00
Quentin Duchemin 5c127a5648 [FW] Add filter for existant Deezer albums 2024-03-18 23:26:49 +01:00
Quentin Duchemin 8bd938a5ed Update ARL 2023-12-18 22:49:19 +01:00
Quentin Duchemin f0a307c5a2 Update ARL 2023-12-18 22:47:48 +01:00
Quentin Duchemin d10796848c Bump FW to 1.4.0 and incrase external timeout 2023-12-13 15:17:55 +01:00
Quentin Duchemin fb8f68892d [Hugo] include content with publishdate in the future 2023-09-28 10:26:34 +02:00
Quentin Duchemin 268262426c Bump Gitea, fix bad usage of environment 2023-08-10 15:41:24 +02:00
  Effectively disabling self registrations
Quentin Duchemin 68257a9f01 Bump FW to 1.3.0 2023-06-11 22:43:57 +02:00
Quentin Duchemin c6304c8f40 Add Restic/autorestic backups for Funkwhale 2023-06-11 22:03:06 +02:00
Quentin Duchemin d564f74116 [FW] Bucket content is public 2023-05-26 17:50:32 +02:00
Quentin Duchemin 8bc30123ff Enable PROXY_MEDIA and bump Funkwhale 2023-05-24 18:49:08 +02:00
Quentin Duchemin 4a949ccd21 Fix CORS for CouchDB 2023-05-24 18:48:46 +02:00
Quentin Duchemin c4c53bab26 Configure lastbeet plugin to use canonical genres rather than heavily specific ones 2023-04-06 18:41:46 +02:00
Quentin Duchemin 5982fd5f61 Status change 2023-03-29 18:58:45 +02:00
Quentin Duchemin 440a5a90bf Remove obsolete ansible parameter 2023-03-07 18:46:45 +01:00
Quentin Duchemin e58cc67739 Stuff 2023-03-07 18:40:50 +01:00
Quentin Duchemin 6d0bd76ba7 Bump Hugo 2022-11-05 23:29:38 +01:00
Quentin Duchemin 9971db7f11 [MC] plugins 2022-11-03 15:53:44 +01:00
Quentin Duchemin 1b97f05da3 Add MC WL 2022-10-27 00:20:18 +02:00
Quentin Duchemin 8b8656ef73 Add MC port to firewall 2022-10-25 20:50:31 +02:00
Quentin Duchemin 65c7080f87 Fix env bool -> string 2022-10-25 00:36:55 +02:00
Quentin Duchemin dd2292bb76 Add Minecraft 2022-10-25 00:29:41 +02:00
Quentin Duchemin 64aa4addec Update ARL 2022-08-23 16:39:36 +02:00
Quentin Duchemin 4792a7cc44 new arl 2022-05-30 11:45:23 +02:00
Quentin Duchemin 9b283deb63 New ARL 2022-05-16 11:57:32 +02:00
Quentin Duchemin 2d9ba15b7b [FW] Bump to 1.2.3 2022-03-22 22:18:11 +01:00
Quentin Duchemin 5da61f24dc Add CouchDB instance 2022-03-03 12:13:31 +01:00
Quentin Duchemin e2a562f04f Remove Lola's blog files 2022-03-03 12:01:35 +01:00
Quentin Duchemin 9105790322 Remove lyrics plugin in beets causing exceptions 2022-02-01 14:48:42 +01:00
Quentin Duchemin 2484e0d429 Bump FW version 2022-02-01 14:48:04 +01:00
Quentin Duchemin 6a2f576dce Bump Lychee 2021-12-30 19:54:09 +01:00
Quentin Duchemin ca017f5349 Conflit on vault 2021-12-10 16:07:36 +01:00
Quentin Duchemin 101e91fb8c Update ARL token 2021-11-29 16:47:22 +01:00
Quentin Duchemin 9cf6ae3dda Add ffmpeg to base packages 2021-11-07 21:14:56 +01:00
Quentin Duchemin 072cef0877 [Lychee] Bump version 2021-10-25 17:27:16 +02:00
Quentin Duchemin b9845bc660 Remove absolute bullshit 2021-10-19 20:45:04 +02:00
Quentin Duchemin f7906a649a [FW] Add import container in DB network 2021-10-17 13:53:40 +02:00
Quentin Duchemin e5f2509c57 [FW], lol, restart policy should have quotes only when equal to no 2021-10-17 13:47:22 +02:00
Quentin Duchemin 00a7a5f317 [FW] Bump to 1.1.4 2021-10-17 13:37:24 +02:00
Quentin Duchemin 9ed4e33a7d [FW] Use Compose module rather than shell for importing music 2021-10-17 13:34:06 +02:00
  This is because for no reason, the shell module will just hang, never launching the container. This is also a bit cleaner...
Quentin Duchemin 499770ef34 Switch from deezloader to deemix 2021-10-13 17:53:07 +02:00
Quentin Duchemin 2aee78e157 Update secrets 2021-09-20 16:36:48 +02:00
Quentin Duchemin ccee2dde1d Add unzip to base packages 2021-09-20 16:20:31 +02:00
Quentin Duchemin af3368a50c Cannot make email work on FW, disable verification 2021-09-09 22:25:58 +02:00
Quentin Duchemin 4da23f56a4 Change secrets 2021-09-09 22:16:39 +02:00
Quentin Duchemin e2ead76a50 Change secrets 2021-09-09 21:57:37 +02:00
Quentin Duchemin f946a16bbf Add peertube 2021-08-05 15:33:12 +02:00
Quentin Duchemin fdcad0f0e4 Who's commiting passwords ? :D 2021-08-05 15:16:22 +02:00
Quentin Duchemin 5dbe1d3036 Fix syntax error for serving static FW files 2021-08-05 15:15:32 +02:00
Quentin Duchemin ee97488ffb Correct and synchronized timezone in system/containers 2021-07-23 14:04:42 +02:00
Quentin Duchemin 86afff5c7b Increase external read timeout (actors from federation randomly reach it for unkwnown reason) 2021-07-19 18:10:35 +02:00
Quentin Duchemin fb6a09fb46 Bump funkwhale version and disable API rate limiting 2021-07-19 13:45:43 +02:00
  When using federation and fetching external libraries, API is overused and this situation leads to 'false positive' rate limit. Try to disable this as nobody knows my instance (this is what you call re@l security baby)
Quentin Duchemin 8085f41cda Add hugo 2021-05-27 03:38:08 +02:00
Quentin Duchemin ad4fd14ab2 Fix NC URL 2021-05-25 16:40:34 +02:00
Quentin Duchemin 45f17b687c Re-switch min beets confidence to 90% to avoid discarding titles, prefer as-is 2021-05-12 19:21:12 +02:00
Quentin Duchemin abfa8c5c82 Remove warnings from rm 2021-05-12 19:20:08 +02:00
Quentin Duchemin 6431cf651e Finalize tagging with beets and importing with funkwhale automatically 2021-05-12 18:45:18 +02:00
Quentin Duchemin dfe55a71a8 Simplify music roles : only installation, download and import 2021-05-12 17:15:08 +02:00
Quentin Duchemin 410ce50a59 make album download work 2021-05-12 17:13:33 +02:00
Quentin Duchemin d41d0f2d7d Add mlocate 2021-05-11 15:35:24 +02:00
Quentin Duchemin 46f92abb00 Start to add tasks to download, tag and import music 2021-05-11 13:56:51 +02:00
Quentin Duchemin 8040fe3a72 add base packages 2021-05-03 12:47:28 +02:00
Quentin Duchemin 4df23c6b9d Switch to new VPS 2021-04-27 03:27:50 +02:00
63 changed files with 1302 additions and 390 deletions

1
.gitignore vendored
View File

@ -1 +1,2 @@
 .vault_password
+albums.txt

View File

@ -2,6 +2,7 @@
 ```
 pip install -r requirements.txt
+ansible-galaxy install -r requirements.yml
 ```
 ### Ansible Vault
@ -11,13 +12,13 @@ To manage secrets, this repository use Ansible Vault.
 Create a secret
 ```
-ansible-vault create inv/host_vars/new.chosto.me/secrets.yml
+ansible-vault create inv/host_vars/chosto.me/secrets.yml
 ```
 Edit a secret
 ```
-ansible-vault edit inv/host_vars/new.chosto.me/secrets.yml
+ansible-vault edit inv/host_vars/chosto.me/secrets.yml
 ```
 ### Server
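For reference, once the requirements and vault secrets above are in place, a playbook run typically looks like this (a usage sketch, not part of the diff; the --vault-password-file path assumes the git-ignored .vault_password file, and the inventory is taken from ansible.cfg):

```
# Hypothetical invocation: run only the Docker-tagged roles from all.yml,
# decrypting host_vars secrets with the local .vault_password file
ansible-playbook all.yml --tags docker --vault-password-file .vault_password
```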

74
all.yml
View File

@ -3,28 +3,58 @@
   become: yes
   roles:
     - role: base
-      tags: ["base"]
+      tags: base
     - role: cron
-      tags: ["cron"]
+      tags: cron
     - role: ufw
-      tags: ["ufw"]
+      tags: ufw
     - role: fail2ban
-      tags: ["fail2ban"]
+      tags: fail2ban
-    - role: "node-exporter"
-      tags: ["node-exporter"]
-    - role: "docker"
-      tags: ["docker"]
-    - role: "traefik"
-      tags: ["docker", "traefik"]
-    - role: "grav"
-      tags: ["docker", "grav"]
-    - role: "lychee"
-      tags: ["docker", "lychee"]
-    - role: "web"
-      tags: ["docker", "web"]
-    - role: "gitea"
-      tags: ["docker", "gitea"]
-    - role: "nextcloud"
-      tags: ["nextcloud", "docker"]
-    - role: "funkwhale"
-      tags: ["funkwhale", "docker"]
+    - role: node-exporter
+      tags: node-exporter
+    - role: docker
+      tags: docker
+    - role: traefik
+      tags:
+        - docker
+        - traefik
+    - role: lychee
+      tags:
+        - docker
+        - lychee
+    - role: web
+      tags:
+        - docker
+        - web
+    - role: gitea
+      tags:
+        - docker
+        - gitea
+    - role: nextcloud
+      tags:
+        - nextcloud
+        - docker
+    - role: music
+      tags:
+        - funwkhale
+        - music
+        - docker
+    - role: hugo
+      tags:
+        - docker
+        - hugo
+    - role: peertube
+      tags:
+        - docker
+        - peertube
+    - role: couchdb
+      tags:
+        - docker
+        - couchdb
+    - role: minecraft
+      tags:
+        - docker
+        - minecraft
+    - role: restic
+      tags:
+        - restic

View File

@ -3,7 +3,7 @@
 # No cows because I am not a funny person
 nocows = 1
 force_color = True
-stdout_callback = unixy
+#stdout_callback = unixy
 # Default inventory file, override with -i
 inventory = ./inv/static.yml

View File

@ -0,0 +1,73 @@
$ANSIBLE_VAULT;1.1;AES256
32633937663864373934613132653334336361653961666636666261326163393961386465306565
6138373562366164343962616562373134366332656235320a326535343938363934303764306330
30356434353266663930373765356130313730663835656262323964353532303962356265343666
3432633637643938660a613863636664326232366539313036333933376138303030353439333961
65386466626435353966613534613330363737366436656632373037653665346137366337616133
30383332313636643534643838356537656437346331613235326264633832306262666439373237
35323337666437353764346163373833396232353839643766653132356264333034363834313638
66333839653462653137646133363639313866653531306234336236313966366230353630666230
63343766376466643635333133306265636162636135353838303734393231323063626635383939
61623237343037633239666462303337333739373130633764636336613231643061626661343234
61656337396137333435326564643463396663366561383838363339346430336662346633643865
32626331383061356332313963633136643237643065393236393332356334653761386165323937
31396263386630316334393337393061633232376337383231623566336136313263613566353234
62653433623837316463353264366462613033396535343261356437396631323730653738616234
62343164643762323566656532303638666133316566316563316233333232353661386562663362
31656262386334333034626233653363323061376537626137626536323063613233343637336634
61386565336336643763323932323362633631393365326132656266303164643331623430623062
66373435386530393532343033623930386434656139633336646636353861346332646537393633
35626230333364316133326461353838343632343537613237313463303633393066643762653933
63333765666430646339323436306161663433623465623132643062656665343234346531303061
61643866393937383436613265643934393863613437313763613765656230316265376365353866
66326664613965646231366162376465323364373033373036383036396139663632376561396432
63393138373966386535353164626539313132376537336538373866343932383537346239626439
38353931343864653935306438613562396263373536643433396234636161343162623261346139
33346536386139353163346264666533653238383562666364353162303965356430343736313333
32356365346331396363336531316135343161306565393936396262306332663639646431386265
64306565353363343162666563626462636639316661373465656363636237356135623339366637
32363830666562646132653034326635303166333732613566343633643133386433623836623635
39623936316538303261383135343738306231643935356230313032616337313364346165653238
34373364656436346334373334646263666231633461396337653630626231313338336438373166
30663738303065663566333962396263303634353966663130623237666536666538323464396339
36323634396263353362636336633735373032333666393163393762393930386530363130383432
64383766623861636165326363303937353165656363323666633138326364616333646632626233
63626266343664346331363363366533323466313935356564616432366533313035653264343263
33346137353462653038633937353732333461646535383262633966353339656365383666363839
35303566393863353637333934313135626165643032356161313839353665313237383432656231
33373064666161613762393036663835336361623464386537393361326337356230316232633964
35306262643830353066366264323764373063373038323762346265666530613166373031333936
34353164333239633836653661613664313364343061613663623663376264303364303137333966
32613064336661356138393862653532656134353861383866373438653964626365636239613464
66643635316464663365633864336266306163306261393139633631346264383133616534306630
39653766613931356332306235323533653365323766356433323632343631383465393135303836
33636137393461613761396135333265393531306233633034656439633433366563623832613033
35326533316436313839663336336461613534643764663436636632656164623637356634623533
33613037376531356437333133646264643837316664663465333165623439633039356163303538
36633930316437343138353332376132636262613432623031313463623032653266346464386630
32333363366263303734643930656536616265613133663034646232666238343533613731393031
33326566326236363838623938356236353265653565626561373032303966643137623334386362
66616335326364323365313561353762303230376465306134643735323931626164386130636561
37313131656165313231393433386133613137623061383962336665653137353034343333376138
34356335303931373936336363373638323164643361343433383966613034353837656664646165
63373432653334363463356537356436616533633763353038313638653932313863643930316635
63343739316562636632383036653835646330343865356466346531386166333535306165383364
61316336333939373335633931313137363463366438323463613039666264613831383935376563
63663035313134336263376464356131306539613532316132346339393139653235353336376235
35333861386262373464326338616330393265653532643732633665303666623236373663323732
66326566623039646536313036626232636361626136616237383634653530666463333939363538
34613964333262326632333237333939373733633639363261343866313165333065643862386462
37373431636535373366333034343035326364646632336362376265356331363033383866326536
31323236363262393938613362643365303536373233646263353831663766633164653438376638
34626662626266626436336265643439313035316166633936656539623838383938656538383637
39323963633566623961653764333636393562333362313233363433313133356430313965313562
33366463623937356331306233326233653132323361356362666237353433646436653939656131
38613938666430623564393964393132626438303864343363323731333931346534356336363766
33303761353934633130656634336462626238623932623464656437383235313636316163313431
66353964326261316165613430656337333635393331656131303565356263323839346363653262
34383733626237633263613666616464653363343866353532343830656633323864323032616536
34346464663833353238393764656634353438383930666665363934323135303337626565343363
62353235643931366234303236336461646233636631373838383266633138356662633862336534
30363562366136393065333765393137356365303262626331313238303337663233316439346165
64613836376466313465653830363639326138306133656133383132633361333164396331373961
66336666353739383436

View File

@ -1,15 +1,18 @@
 firewall_in_ports:
   - "80"
   - "443"
+  - "25565"
   - "{{ ssh_port }}"
 hostname: "{{ base_user_name }}"
+timezone: Europe/Paris
 ssh_port: "2220"
 prometheus_server_ip: "51.178.182.35"
 docker_files: "/home/{{ base_user_name }}/docker"
 compose_version: "3.7"
 traefik_network: proxy
@ -17,3 +20,7 @@ traefik_network: proxy
 domain_name: chosto.me
 letsencrypt_email: quentinduchemin@tuta.io
+# When importing to Funkwhale, you must choose a library
+# Create a library, get the ID in URL and put it there
+funkwhale_import_library_id: 3e772648-0ce1-4dc1-be59-39e8e6f409d6

View File

@ -1,34 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
34363462333030653462383364323934653331333861333732303365626439666666393232376139
6161356563623135646365323133326333383734383136340a643335623334363066353930303638
38653862376330353361613661383330343338633963333538623934396537356137643833663262
3431653035643063330a383634633966643133386236303064663666303935333636386532363363
33646631343761363133646635663836313832616264313134616635373230393935396330373936
36656666623631636230356665366532613230396565336136316530633432326665366135376238
34633666623063383632663333366137666265363663323264643631323463633865336635636435
35616631623532303536613064353135353034333739656432393835303839333165633135663934
31663233656137653230343036666336386361393937383636336536396539303131393133653234
30343030373863636232643635656664643561383264643465363163656131323731326361623639
31663362363337306238616564336330303462346537393336363266323031653166323366333466
36376433373663666535623864303533353837663064623432306363356638363634323831663437
31663462323666633835663831653439306438376662343762663136613532366136636661383166
33303563613436323334366532316336346635356433663766363831646336336665653365616663
32303165313935326462393833363563313235386637353761306262353733316265383133303037
35373338653931383463323533646262653066323164313939336336376262353066363339653938
62383035653333663663336364646634336563366131653665373033333365386562333966353063
36383964633561326262616439383739343736343362363264393137366662306630656364333532
63346331636266626637666264343263303534313038386263666634353330643938393236336361
34356661343334316162313030636533643064383531653836356366623432383066333033663536
32656639323030653635636265343731336531646539356261383139663261386439376237396536
62666130353038386635333265376630376165376433336436636331316531663935663339356436
35303765303031323564333232363335643235376366613931653035313035663737353937393737
66353663643735623762303234663762356136326133656338656664313637346136376266383636
36386637326430626264666362643639636533373530366337373561643335363236646237636338
62393531643663646433303233366233366536373865613331383539616238303135383665343930
33303930633533333637343634393038356235646533613766623436306666306166383632303233
38343063636236663432333336393838373637633737363865373261343965623736326433313937
34323037326362323032356232373065666639616362393536653663316439376662636431626238
32353838666535633831353538306634636562343633656663343131386462656536633663333235
38386435313336613962313665616132323431356333353861386663313562373837663966623532
65363438643666326163393761626231386331343435636562336363643733353439326230326637
61633531316335396662663539366264633034373333336638323734336364323038

View File

@ -1,6 +1,7 @@
 all:
   hosts:
-    new.chosto.me:
+    chosto.me:
+      ansible_host: 51.159.149.245
       ansible_port: 2220
       ansible_user: chosto
       ansible_ssh_private_key_file: ~/.ssh/scaleway

25
music.yml 100644
View File

@ -0,0 +1,25 @@
---
- hosts: all
become: yes
tasks:
- name: Install and configure Funkwhale, deemix and beets
include_role:
name: music
tasks_from: main
apply:
tags: install
tags: install
- name: Download submitted list of albums
include_role:
name: music
tasks_from: download_music
apply:
tags: download
tags: download
- name: Import music into Funkwhale
include_role:
name: music
tasks_from: import_music
apply:
tags: import
tags: import
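The include_role/apply pairing above makes each stage run only when its tag is requested explicitly, so the stages of the pipeline can be triggered independently, for example (a usage sketch, not part of the diff):

```
# Hypothetical runs, one per stage of the music pipeline
ansible-playbook music.yml --tags install    # install Funkwhale, deemix and beets
ansible-playbook music.yml --tags download   # download the submitted album list
ansible-playbook music.yml --tags import     # tag with beets and import into Funkwhale
```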

4
requirements.yml 100644
View File

@ -0,0 +1,4 @@
collections:
- community.general
- community.docker
- ansible.posix

View File

@ -4,3 +4,5 @@ ihl_base_apt_cache_time: 3600
 ihl_base_additional_groups: []
 ihl_base_users: []
 ihl_base_ssh_users: []
+timezone: Europe/Paris

View File

@ -1,15 +1,24 @@
-- include: apt.yml
+- include_tasks:
+    file: apt.yml
   tags:
     - apt
-- include: users.yml
+- include_tasks:
+    file: timezone.yml
+  tags:
+    - timezone
+- include_tasks:
+    file: users.yml
   tags:
     - users
-- include: hostname.yml
+- include_tasks:
+    file: hostname.yml
   tags:
     - hostname
-- include: ssh.yml
+- include_tasks:
+    file: ssh.yml
   tags:
     - ssh

View File

@ -0,0 +1,3 @@
- name: Set correct timezone
community.general.timezone:
name: "{{ timezone }}"

View File

@ -3,11 +3,13 @@ ihl_base_apt_packages:
   - ca-certificates
   - curl
   - dnsutils
+  - ffmpeg
   - git
   - htop
   - jq
   - less
   - lm-sensors
+  - mlocate
   - python3
   - python3-pip
   - python3-setuptools
@ -15,3 +17,6 @@ ihl_base_apt_packages:
   - sudo
   - nano
   - rsync
+  - sshfs
+  - tmux
+  - unzip

View File

@ -0,0 +1,24 @@
---
- name: Create CouchDB directory
file:
path: "{{ couchdb_folder_name }}"
state: directory
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0755
- name: Copy CouchDB Compose file
template:
src: docker-compose.yml.j2
dest: "{{ couchdb_folder_name }}/docker-compose.yml"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Ensure container is up to date
community.docker.docker_compose:
project_src: "{{ couchdb_folder_name }}"
remove_orphans: yes
pull: yes
recreate: smart
state: present

View File

@ -0,0 +1,35 @@
version: "{{ compose_version }}"
networks:
proxy:
name: "{{ traefik_network }}"
volumes:
db:
name: couchdb
services:
couchdb:
image: "couchdb:{{ couchdb_version }}"
container_name: couchdb
networks:
- proxy
volumes:
- db:/opt/couchdb/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
environment:
COUCHDB_USER: "{{ couchdb_user }}"
COUCHDB_PASSWORD: "{{ couchdb_password }}"
labels:
traefik.http.routers.couchdb.entrypoints: websecure
traefik.http.routers.couchdb.rule: "Host(`{{ couchdb_subdomain }}.{{ domain_name }}`)"
traefik.http.routers.couchdb.middlewares: cors@docker
traefik.http.services.couchdb.loadbalancer.server.port: 5984
traefik.http.middlewares.cors.headers.accessControlAllowOriginList: https://tempo.agate.blue
traefik.http.middlewares.cors.headers.accessControlAllowCredentials: true
# Cannot use wildcards with creds, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers
traefik.http.middlewares.cors.headers.accessControlAllowHeaders: "Content-Type"
traefik.http.middlewares.cors.headers.accessControlAllowMethods: GET, OPTIONS, POST, PUT, DELETE
traefik.enable: true
restart: unless-stopped
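The Traefik CORS labels above can be verified with a preflight request once the stack is up (a check sketch, not part of the diff; the host couchdb.chosto.me is assumed from couchdb_subdomain and domain_name):

```
# Hypothetical check: the response headers should echo the allowed origin,
# credentials, methods and headers configured in the cors middleware
curl -si -X OPTIONS \
  -H "Origin: https://tempo.agate.blue" \
  -H "Access-Control-Request-Method: GET" \
  https://couchdb.chosto.me/ | grep -i "access-control"
```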

View File

@ -0,0 +1,5 @@
couchdb_version: "3.2.1"
couchdb_folder_name: "{{ docker_files }}/couchdb"
couchdb_subdomain: couchdb
couchdb_user: "couchdb"
couchdb_password: "{{ couchdb_db_password }}"

View File

@ -1,19 +0,0 @@
# use this one if you put the nginx container behind another proxy
# you will have to set some headers on this proxy as well to ensure
# everything works correctly, you can use the ones from the funkwhale_proxy.conf file
# at https://dev.funkwhale.audio/funkwhale/funkwhale/blob/develop/deploy/funkwhale_proxy.conf
# your proxy will also need to support websockets
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header X-Forwarded-Host $http_x_forwarded_host;
proxy_set_header X-Forwarded-Port $http_x_forwarded_port;
proxy_redirect off;
# websocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

View File

@ -1,98 +0,0 @@
upstream funkwhale-api {
# depending on your setup, you may want to update this
server funkwhale_api:{{ funkwhale_api_port }};
}
# required for websocket support
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen {{ funkwhale_nginx_port }};
server_name {{ funkwhale_subdomain }}.{{ domain_name }};
# TLS
# Feel free to use your own configuration for SSL here or simply remove the
# lines and move the configuration to the previous server block if you
# don't want to run funkwhale behind https (this is not recommended)
# have a look here for let's encrypt configuration:
# https://certbot.eff.org/all-instructions/#debian-9-stretch-nginx
root {{ funkwhale_frontend }};
# If you are using S3 to host your files, remember to add your S3 URL to the
# media-src and img-src headers (e.g. img-src 'self' https://<your-S3-URL> data:)
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' https://s3.fr-par.scw.cloud data:; font-src 'self' data:; object-src 'none'; media-src 'self' https://s3.fr-par.scw.cloud data:";
add_header Referrer-Policy "strict-origin-when-cross-origin";
location / {
include /etc/nginx/funkwhale_proxy.conf;
# this is needed if you have file import via upload enabled
client_max_body_size {{ nginx_max_body_size }};
proxy_pass http://funkwhale-api/;
}
location /front/ {
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self' data:; object-src 'none'; media-src 'self' data:";
add_header Referrer-Policy "strict-origin-when-cross-origin";
add_header Service-Worker-Allowed "/";
add_header X-Frame-Options "ALLOW";
alias /frontend/;
expires 30d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
location /front/embed.html {
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self' data:; object-src 'none'; media-src 'self' data:";
add_header Referrer-Policy "strict-origin-when-cross-origin";
add_header X-Frame-Options "ALLOW";
alias /frontend/embed.html;
expires 30d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
location /federation/ {
include /etc/nginx/funkwhale_proxy.conf;
proxy_pass http://funkwhale-api/federation/;
}
# You can comment this if you do not plan to use the Subsonic API
location /rest/ {
include /etc/nginx/funkwhale_proxy.conf;
proxy_pass http://funkwhale-api/api/subsonic/rest/;
}
location /.well-known/ {
include /etc/nginx/funkwhale_proxy.conf;
proxy_pass http://funkwhale-api/.well-known/;
}
location ~ /_protected/media/(.+) {
internal;
# Needed to ensure DSub auth isn't forwarded to S3/Minio, see #932
proxy_set_header Authorization "";
proxy_pass $1;
}
location /_protected/music {
# this is an internal location that is used to serve
# audio files once correct permission / authentication
# has been checked on API side
# Set this to the same value as your MUSIC_DIRECTORY_PATH setting
internal;
alias {{ funkwhale_import_music_directory }};
}
location /staticfiles/ {
# django static files
alias {{ funkwhale_static_root }}}/;
}
}

View File

@ -19,17 +19,18 @@ services:
     environment:
       USER_UID: 1000
       USER_GID: 1000
-      DB_TYPE: postgres
-      DB_HOST: db:5432
-      APP_NAME: {{ gitea_name }}
-      RUN_MODE: prod
-      DOMAIN: {{ gitea_subdomain }}.{{ domain_name }}
-      SSH_DOMAIN: {{ gitea_subdomain }}.{{ domain_name }}
-      ROOT_URL: https://{{ gitea_subdomain }}.{{ domain_name }}
-      DISABLE_REGISTRATION: "true"
-      DB_NAME: gitea
-      DB_USER: gitea
-      DB_PASSWD: "{{ gitea_db_password }}"
+      # See https://docs.gitea.com/installation/install-with-docker#managing-deployments-with-environment-variables
+      GITEA__database__DB_TYPE: postgres
+      GITEA__database__DB_HOST: db:5432
+      GITEA__database__NAME: gitea
+      GITEA__database__USER: gitea
+      GITEA__database__PASSWD: "{{ gitea_db_password }}"
+      GITEA__DEFAULT__APP_NAME: {{ gitea_name }}
+      GITEA__DEFAULT__RUN_MODE: prod
+      GITEA__server__DOMAIN: {{ gitea_subdomain }}.{{ domain_name }}
+      GITEA__server__SSH_DOMAIN: {{ gitea_subdomain }}.{{ domain_name }}
+      GITEA__server__ROOT_URL: https://{{ gitea_subdomain }}.{{ domain_name }}
+      GITEA__service__DISABLE_REGISTRATION: "true"
     networks:
       - proxy
       - db

View File

@ -1,4 +1,4 @@
-gitea_version: "1.14.1"
+gitea_version: "1.20"
 gitea_folder_name: "{{ docker_files }}/gitea"
 postgres_version: "13"
 gitea_name: Chostea

View File

@ -1,7 +0,0 @@
#!/bin/sh
set -e
echo "Launching supercronic..."
supercronic /var/www/crontab &
echo "Launching Grav..."
exec $@

View File

@ -1,41 +0,0 @@
---
- name: Create Grav directory
file:
path: "{{ grav_folder_name }}"
state: directory
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0755
- name: Copy Grav templates
template:
src: "{{ item }}"
# Remove .j2 extension
dest: "{{ grav_folder_name }}/{{ (item | splitext)[0] }}"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
loop:
- docker-compose.yml.j2
- Dockerfile.j2
- name: Copy Grav entrypoint
copy:
src: entrypoint.sh
dest: "{{ grav_folder_name }}/entrypoint.sh"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Build Grav
community.docker.docker_compose:
project_src: "{{ grav_folder_name }}"
build: yes
- name: Ensure container is up to date
community.docker.docker_compose:
project_src: "{{ traefik_folder_name }}"
remove_orphans: yes
pull: yes
recreate: smart
state: present

View File

@ -1,92 +0,0 @@
FROM php:7.4-apache
LABEL maintainer="Andy Miller <rhuk@getgrav.org> (@rhukster)"
# Enable Apache Rewrite + Expires Module
RUN a2enmod rewrite expires && \
sed -i 's/ServerTokens OS/ServerTokens ProductOnly/g' \
/etc/apache2/conf-available/security.conf
# Install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
unzip \
libfreetype6-dev \
libjpeg62-turbo-dev \
libpng-dev \
libyaml-dev \
libzip4 \
libzip-dev \
zlib1g-dev \
libicu-dev \
g++ \
git \
cron \
vim \
&& docker-php-ext-install opcache \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install zip \
&& rm -rf /var/lib/apt/lists/*
# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.interned_strings_buffer=8'; \
echo 'opcache.max_accelerated_files=4000'; \
echo 'opcache.revalidate_freq=2'; \
echo 'opcache.fast_shutdown=1'; \
echo 'opcache.enable_cli=1'; \
echo 'upload_max_filesize=128M'; \
echo 'post_max_size=128M'; \
echo 'expose_php=off'; \
} > /usr/local/etc/php/conf.d/php-recommended.ini
RUN pecl install apcu \
&& pecl install yaml-2.0.4 \
&& docker-php-ext-enable apcu yaml
# Install Supercronic
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
# Create cron job for Grav maintenance scripts
RUN echo "*/30 * * * * cd /var/www/html;/usr/local/bin/php bin/grav scheduler 1>> /dev/null 2>&1" > /var/www/crontab
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
RUN sed -i s/80/{{ grav_internal_port }}/g /etc/apache2/sites-enabled/000-default.conf /etc/apache2/ports.conf
# Set user to www-data
RUN chown www-data:www-data /var/www
USER www-data
# Set Grav version
ARG GRAV_VERSION={{ grav_version }}
# Install grav
WORKDIR /var/www
RUN curl -o grav-admin.zip -SL https://getgrav.org/download/core/grav-admin/${GRAV_VERSION} && \
unzip grav-admin.zip && \
mv -T /var/www/grav-admin /var/www/html && \
rm grav-admin.zip
# Install plugins
RUN cd html && \
bin/gpm install admin
# provide container inside image for data persistance
VOLUME ["/var/www/html"]
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]

View File

@ -1,25 +0,0 @@
version: "{{ compose_version }}"
networks:
proxy:
name: "{{ traefik_network }}"
volumes:
grav_lola:
name: grav_lola
services:
grav_lola:
image: grav:{{ grav_version }}
build: .
container_name: grav_lola
volumes:
- grav_lola:/var/www/html
networks:
- proxy
labels:
traefik.http.routers.grav.entrypoints: websecure
traefik.http.routers.grav.rule: "Host(`blog.leaula.me`)"
traefik.http.services.grav.loadbalancer.server.port: "{{ grav_internal_port }}"
traefik.enable: true
restart: unless-stopped

View File

@ -1,3 +0,0 @@
grav_internal_port: 8080
grav_version: 1.7.13
grav_folder_name: "{{ docker_files }}/grav"

View File

@ -0,0 +1,37 @@
---
- name: Create Hugo directory
file:
path: "{{ hugo_folder_name }}"
state: directory
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0755
- name: Copy Hugo Compose file
template:
src: docker-compose.yml.j2
# Remove .j2 extension
dest: "{{ hugo_folder_name }}/docker-compose.yml"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Clone blog
ansible.builtin.git:
repo: "{{ repository_url }}"
dest: "{{ hugo_website }}"
force: yes
ignore_errors: yes
- name: Pull new stuff
shell:
cmd: git pull
chdir: "{{ hugo_website }}"
- name: Ensure container is up to date
community.docker.docker_compose:
project_src: "{{ hugo_folder_name }}"
remove_orphans: yes
pull: yes
recreate: smart
state: present

View File

@ -0,0 +1,54 @@
version: "{{ compose_version }}"
networks:
proxy:
name: "{{ traefik_network }}"
# Use a bind mount for Hugo data, easier to pull new versions of blog
volumes:
website_files:
driver: local
driver_opts:
type: none
device: "{{ hugo_website }}"
o: bind
website_public:
driver: local
driver_opts:
type: none
device: "{{ hugo_website }}/public"
o: bind
services:
builder:
container_name: hugo_builder
image: "klakegg/hugo:{{ hugo_version }}"
volumes:
- website_files:/src
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
# include content with publishdate in the future
command: --buildFuture
# Hugo will build only
# when triggered
restart: on-failure
front:
container_name: hugo_front
image: nginx:alpine
volumes:
- website_public:/usr/share/nginx/html:ro
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
labels:
traefik.http.routers.hugo.entrypoints: websecure
traefik.http.routers.hugo.rule: "Host(`blog.{{ domain_name }}`)"
traefik.http.services.hugo.loadbalancer.server.port: 80
traefik.enable: true
networks:
- proxy
read_only: true
tmpfs:
- /var/cache/nginx
- /run
restart: unless-stopped

View File

@ -0,0 +1,6 @@
hugo_folder_name: "{{ docker_files }}/hugo"
hugo_website: "{{ hugo_folder_name }}/website"
# Use extended edition with Git inside
# to read git info (useful for lastmod)
hugo_version: 0.105.0-ext-alpine
repository_url: https://git.chosto.me/Chosto/blog.git

View File

@ -18,6 +18,8 @@ services:
     image: "lycheeorg/lychee:{{ lychee_version }}"
     volumes:
       - uploads:/uploads
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
     labels:
       traefik.http.routers.lychee.entrypoints: websecure
       traefik.http.routers.lychee.rule: "Host(`pic.{{ domain_name }}`)"
@ -36,7 +38,9 @@ services:
       APP_NAME: Lychee
       APP_ENV: production
       APP_DEBUG: "false"
+      APP_URL: "https://pic.{{ domain_name }}"
       STARTUP_DELAY: 5
+      TRUSTED_PROXIES: "*"
     networks:
       - proxy
       - db
@ -47,6 +51,7 @@ services:
     container_name: lychee_db
     volumes:
       - db:/var/lib/postgresql/data
+      - /etc/timezone:/etc/timezone:ro
       - /etc/localtime:/etc/localtime:ro
     environment:
       POSTGRES_USER: lychee

View File

@ -1,3 +1,3 @@
 lychee_folder_name: "{{ docker_files }}/lychee"
-lychee_version: v4.3.0
+lychee_version: v5.5.1
 postgres_version: 13

View File

@ -0,0 +1,26 @@
---
- name: Create minecraft directory
file:
path: "{{ minecraft_folder_name }}"
state: directory
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0755
- name: Copy minecraft Compose file
template:
src: docker-compose.yml.j2
# Remove .j2 extension
dest: "{{ minecraft_folder_name }}/docker-compose.yml"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Ensure container is up to date
community.docker.docker_compose:
project_src: "{{ minecraft_folder_name }}"
remove_orphans: yes
pull: yes
recreate: smart
state: present
stopped: true

View File

@ -0,0 +1,25 @@
version: "{{ compose_version }}"
volumes:
data:
name: minecraft_data
services:
lychee:
container_name: minecraft
image: "itzg/minecraft-server"
environment:
TYPE: "PAPER"
EULA: "TRUE"
SNOOPER_ENABLED: "false"
DIFFICULTY: "normal"
MOTD: "Le gentil serveur de Momo Pierre et Quentin"
WHITELIST: "Joyau,MissPlumelle,XxGasKanxX"
# Ultra SetHome,ActionBar
SPIGET_RESOURCES: 96934,2661
ports:
- "25565:25565"
volumes:
- data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro

View File

@ -0,0 +1 @@
minecraft_folder_name: "{{ docker_files }}/minecraft"

View File

@ -0,0 +1,13 @@
# global proxy conf
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_redirect off;
# websocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

View File

@ -0,0 +1,38 @@
- name: Update deemix configuration file
template:
src: deemix_config.json.j2
dest: "{{ deemix_folder_path }}/config/config.json"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Update ARL token file
template:
src: arl.j2
dest: "{{ deemix_folder_path }}/config/.arl"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Filter non-existing albums
shell:
chdir: ~/documents/code/funkwhale-playlist-import
cmd: ./exclude-existing-albums.py -s deezer > /tmp/unique_albums.txt
stdin: "{{ lookup('file', 'files/albums.txt') }}"
register: unique_albums
delegate_to: localhost
become: false
- name: Download required albums
# So that files are written with base user perms
become: true
become_user: "{{ base_user_name }}"
shell:
cmd: "deemix --portable -p {{ deemix_songs_path }} {{ item }}"
chdir: "{{ deemix_folder_path }}"
with_items: "{{ lookup('file', '/tmp/unique_albums.txt').splitlines() }}"
register: output_deemix
- name: Show download state
debug:
msg: "{{ output_deemix }}"

View File

@ -0,0 +1,33 @@
- name: Update beets configuration file
template:
src: beets_config.yaml.j2
dest: "{{ beets_config_folder }}/config.yaml"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Make sure logs are writable
file:
path: "{{ beets_log_file }}"
state: touch
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
- name: Tag music (auto-tag when confidence > 90%, use as-is otherwise)
# So that files are written with user perms
become: yes
become_user: "{{ base_user_name }}"
shell:
# Quiet mode = do not ask anything to the user
# Default are in configuration file
cmd: "beet -c {{ beets_config_folder }}/config.yaml import -q {{ deemix_songs_path }}"
- name: Import music into Funkwhale
shell:
cmd: "docker-compose exec -T api funkwhale-manage import_files {{ funkwhale_import_library_id }} {{ funkwhale_import_music_directory }} --recursive --noinput --prune"
chdir: "{{ funkwhale_folder_name }}"
- name: Delete files once imported
shell:
cmd: "rm -rf {{ funkwhale_import_music_directory_host }}/*"

View File

@ -1,4 +1,28 @@
 ---
+- name: Install deemix and beets
+  pip:
+    name: "{{ item }}"
+    state: present
+  loop:
+    - deemix
+    - beets
+    - pexpect
+    - pylast
+- name: Create deemix and beets directories
+  file:
+    path: "{{ item }}"
+    state: directory
+    owner: "{{ base_user_name }}"
+    group: "{{ base_user_name }}"
+    mode: 0755
+    recurse: yes
+  loop:
+    - "{{ deemix_folder_path }}"
+    - "{{ deemix_folder_path }}/config"
+    - "{{ deemix_songs_path }}"
+    - "{{ beets_config_folder }}"
 - name: Create Funkwhale directory
   file:
     path: "{{ funkwhale_folder_name }}"
@ -7,7 +31,7 @@
     group: "{{ base_user_name }}"
     mode: 0755
-- name: Copy Traefik templates (nginx conf and Compose)
+- name: Copy Funkwhale templates (nginx conf and Compose)
   template:
     src: "{{ item }}"
     # Remove .j2 extension
@ -18,7 +42,6 @@
   loop:
     - docker-compose.yml.j2
     - conf.env.j2
-    - nginx.conf.j2
 - name: Copy nginx proxy file
   copy:

View File

@ -0,0 +1 @@
{{ arl_deezer_token }}

View File

@ -0,0 +1,45 @@
directory: {{ funkwhale_import_music_directory_host }}
threaded: yes
plugins: ftintitle embedart duplicates fetchart lastgenre acousticbrainz
match:
# Allow 90% confidence for auto-tagging
strong_rec_thresh: 0.10
max_rec:
media: strong
label: strong
year: strong
preferred:
# I have only a few physical CD
media: ['Digital Media']
discogs:
user_token: {{ discogs_user_token }}
acoustid:
apikey: {{ acoustid_api_key }}
ui:
color: yes
import:
# Always move files to Funkwhale import directory
move: yes
# Previous import interrupted, start for the begining
# Should not really change because files are deleted after import
resume: no
# Merge albums if they look the same
duplicate_action: merge
# Use-as-is if no release found (could be then added to MusicBrainz)
# Reasonable because Deezer metadata is good enough in most cases
quiet_fallback: asis
log: {{ beets_log_file }}
# Preferred languages for aliases (in case of foreign artist with another alphabet for example)
languages: fr en
lastgenre:
canonical: yes
count: 10
force: yes
source: album

View File

@ -4,9 +4,9 @@ FUNKWHALE_WEB_WORKERS=4
 FUNKWHALE_HOSTNAME={{ funkwhale_subdomain }}.{{ domain_name }}
 FUNKWHALE_PROTOCOL=https
-EMAIL_CONFIG=smtp+tls://{{ funkwhale_subdomain }}@{{ domain_name }}:mD32H&Y2X$9XPFQtS!tq@mail.gandi.net:587
-DEFAULT_FROM_EMAIL={{ funkwhale_subdomain }}@{{ domain_name }}
+EMAIL_CONFIG=smtp+tls://{{ funkwhale_email_user }}:{{ funkwhale_mail_password }}@ssl0.ovh.net:587
+DEFAULT_FROM_EMAIL={{ funkwhale_email_user }}
+ACCOUNT_EMAIL_VERIFICATION_ENFORCE=false
 DATABASE_URL=postgresql://funkwhale:{{ funkwhale_db_password }}@funkwhale_postgres:5432/funkwhale
 REVERSE_PROXY_TYPE=nginx
@ -15,7 +15,8 @@ CACHE_URL=redis://funkwhale_redis:6379/0
 STATIC_ROOT={{ funkwhale_static_root }}
 MUSIC_DIRECTORY_PATH={{ funkwhale_import_music_directory }}
-FUNKWHALE_FRONTEND_PATH={{ funkwhale_frontend }}
+# Dummy value for front container ; we have S3
+MEDIA_ROOT=/media
 DJANGO_SETTINGS_MODULE=config.settings.production
 DJANGO_SECRET_KEY={{ funkwhale_secret_key }}
@ -25,5 +26,19 @@ NGINX_MAX_BODY_SIZE={{ nginx_max_body_size}}
 AWS_ACCESS_KEY_ID={{ scaleway_s3_id }}
 AWS_SECRET_ACCESS_KEY={{ scaleway_s3_key }}
 AWS_STORAGE_BUCKET_NAME=celiglyphe
+# URL used to make changes
 AWS_S3_ENDPOINT_URL=https://s3.fr-par.scw.cloud
+# Base URL used to construct listening URLs (acts like a CDN, see Scaleway Edge Services)
+# ⚠️ Scheme is https by default + no trailing slash
+AWS_S3_CUSTOM_DOMAIN=files.chosto.me
+AWS_S3_REGION_NAME=fr-par
+# My bucket is public
+AWS_QUERYSTRING_AUTH=false
+AWS_DEFAULT_ACL=public-read
 PROXY_MEDIA=false
+EXTERNAL_MEDIA_PROXY_ENABLED=false
+THROTTLING_ENABLED=false
+EXTERNAL_REQUESTS_TIMEOUT=120
+NGINX_MAX_BODY_SIZE=500M

View File

@ -0,0 +1,78 @@
{
"downloadLocation": "{{ deemix_songs_path }}",
"tracknameTemplate": "%artist% - %title%",
"albumTracknameTemplate": "%tracknumber% - %title%",
"playlistTracknameTemplate": "%position% - %artist% - %title%",
"createPlaylistFolder": true,
"playlistNameTemplate": "%playlist%",
"createArtistFolder": false,
"artistNameTemplate": "%artist%",
"createAlbumFolder": true,
"albumNameTemplate": "%artist% - %album%",
"createCDFolder": true,
"createStructurePlaylist": false,
"createSingleFolder": false,
"padTracks": true,
"paddingSize": "0",
"illegalCharacterReplacer": "_",
"queueConcurrency": 3,
"maxBitrate": "3",
"fallbackBitrate": true,
"fallbackSearch": false,
"logErrors": true,
"logSearched": false,
"overwriteFile": "n",
"createM3U8File": false,
"playlistFilenameTemplate": "playlist",
"syncedLyrics": false,
"embeddedArtworkSize": 800,
"embeddedArtworkPNG": false,
"localArtworkSize": 1400,
"localArtworkFormat": "jpg",
"saveArtwork": true,
"coverImageTemplate": "cover",
"saveArtworkArtist": false,
"artistImageTemplate": "folder",
"jpegImageQuality": 80,
"dateFormat": "Y-M-D",
"albumVariousArtists": true,
"removeAlbumVersion": false,
"removeDuplicateArtists": false,
"featuredToTitle": "0",
"titleCasing": "nothing",
"artistCasing": "nothing",
"executeCommand": "",
"tags": {
"title": true,
"artist": true,
"album": true,
"cover": true,
"trackNumber": true,
"trackTotal": false,
"discNumber": true,
"discTotal": false,
"albumArtist": true,
"genre": true,
"year": true,
"date": true,
"explicit": false,
"isrc": true,
"length": true,
"barcode": true,
"bpm": true,
"replayGain": false,
"label": true,
"lyrics": false,
"syncedLyrics": false,
"copyright": false,
"composer": false,
"involvedPeople": false,
"source": false,
"rating": false,
"savePlaylistAsCompilation": false,
"useNullSeparator": false,
"saveID3v1": true,
"multiArtistSeparator": "default",
"singleAlbumArtist": false,
"coverDescriptionUTF8": false
}

View File

@ -11,14 +11,12 @@ volumes:
     name: funkwhale_redis
   db:
     name: funkwhale_db
-  frontend:
-    name: funkwhale_frontend
   static:
     name: funkwhale_static
 services:
   celeryworker:
-    image: "funkwhale/funkwhale:{{ funkwhale_version }}"
+    image: "funkwhale/api:{{ funkwhale_version }}"
     container_name: funkwhale_celeryworker
     env_file:
       - ./conf.env
@ -26,29 +24,32 @@ services:
       - C_FORCE_ROOT=true
     volumes:
       - "{{ funkwhale_import_music_directory_host }}:{{ funkwhale_import_music_directory }}:ro"
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
-    command: celery -A funkwhale_api.taskapp worker -l INFO
+    command: celery -A funkwhale_api.taskapp worker -l INFO --concurrency=10
     networks:
       - db
     restart: unless-stopped
   celerybeat:
-    image: "funkwhale/funkwhale:{{ funkwhale_version }}"
+    image: "funkwhale/api:{{ funkwhale_version }}"
     container_name: funkwhale_celerybeat
     env_file: ./conf.env
-    command: celery -A funkwhale_api.taskapp beat --pidfile= -l INFO
+    command: celery -A funkwhale_api.taskapp beat -l INFO
     networks:
       - db
     restart: unless-stopped
   api:
-    image: "funkwhale/funkwhale:{{ funkwhale_version }}"
+    image: "funkwhale/api:{{ funkwhale_version }}"
     container_name: funkwhale_api
     env_file:
       - ./conf.env
     volumes:
       - "{{ funkwhale_import_music_directory_host }}:{{ funkwhale_import_music_directory }}:ro"
       - "static:{{ funkwhale_static_root }}"
-      - "frontend:{{ funkwhale_frontend }}"
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
     labels:
       traefik.http.routers.funkwhale_api.entrypoints: websecure
       traefik.http.routers.funkwhale_api.rule: "Host(`api.{{ funkwhale_subdomain }}.{{ domain_name }}`)"
@ -59,16 +60,15 @@
       - db
     restart: unless-stopped
-  nginx:
-    image: nginx
-    container_name: funkwhale_nginx
+  front:
+    image: funkwhale/front:{{ funkwhale_version }}
+    container_name: funkwhale_front
     env_file: ./conf.env
     volumes:
-      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
-      - ./funkwhale_proxy.conf:/etc/nginx/funkwhale_proxy.conf:ro
       - "{{ funkwhale_import_music_directory_host }}:{{ funkwhale_import_music_directory }}:ro"
-      - "static:{{ funkwhale_static_root }}"
-      - "frontend:{{ funkwhale_frontend }}"
+      - "static:/usr/share/nginx/html/staticfiles:ro"
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
     labels:
       traefik.http.routers.funkwhale.entrypoints: websecure
       traefik.http.routers.funkwhale.rule: "Host(`{{ funkwhale_subdomain }}.{{ domain_name }}`)"
@ -84,6 +84,8 @@ services:
     env_file: ./conf.env
     volumes:
       - redis:/data
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
     networks:
       - db
     restart: unless-stopped
@ -95,6 +97,10 @@ services:
       POSTGRES_USER: funkwhale
       POSTGRES_DB: funkwhale
       POSTGRES_PASSWORD: "{{ funkwhale_db_password }}"
+      TZ: Europe/Paris
+      PGTZ: Europe/Paris
+      # Don't mount /etc/localtime, it screws with pg_timezone_names
+      # TZ and PGTZ environment are sufficient.
     volumes:
       - db:/var/lib/postgresql/data
     networks:

View File

@ -1,12 +1,15 @@
-funkwhale_version: 1.1.1
+funkwhale_version: 1.4.0
 funkwhale_api_port: 5000
 funkwhale_nginx_port: 80
 funkwhale_static_root: /static
 funkwhale_import_music_directory: /import
 funkwhale_import_music_directory_host: "{{ funkwhale_folder_name }}/import"
 funkwhale_folder_name: "{{ docker_files }}/funkwhale"
-funkwhale_frontend: /frontend
 funkwhale_subdomain: music
 nginx_max_body_size: 100M
-postgres_version: 13
+postgres_version: 15
 redis_version: 6
+deemix_folder_path: /home/{{ base_user_name }}/deemix
+deemix_songs_path: "{{ deemix_folder_path }}/songs"
+beets_config_folder: "/home/{{ base_user_name }}/.config/beets"
+beets_log_file: "/var/log/beets.log"

View File

@ -23,3 +23,4 @@
     pull: yes
     recreate: smart
     state: present
+  ignore_errors: yes

View File

@ -15,11 +15,16 @@ services:
     container_name: nextcloud
     volumes:
       - nextcloud:/var/www/html
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
     labels:
       traefik.http.routers.cloud.entrypoints: websecure
-      traefik.http.routers.cloud.rule: "Host(`cloud1.{{ domain_name }}`)"
+      traefik.http.routers.cloud.rule: "Host(`cloud.{{ domain_name }}`)"
       traefik.http.services.cloud.loadbalancer.server.port: 80
       traefik.enable: true
+    environment:
+      OVERWRITECLIURL: https://cloud.chosto.me
+      OVERWRITEPROTOCOL: https
     networks:
       - proxy
     restart: unless-stopped

View File

@ -1,2 +1,2 @@
-nextcloud_version: 21
+nextcloud_version: 25
 nextcloud_folder_name: "{{ docker_files }}/nextcloud"

View File

@ -0,0 +1,29 @@
---
- name: Create Peertube directory
file:
path: "{{ peertube_folder_name }}"
state: directory
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0755
- name: Copy Peertube templates (env file and Compose)
template:
src: "{{ item }}"
# Remove .j2 extension
dest: "{{ peertube_folder_name }}/{{ (item | splitext)[0] }}"
owner: "{{ base_user_name }}"
group: "{{ base_user_name }}"
mode: 0644
loop:
- docker-compose.yml.j2
- conf.env.j2
- peertube.conf.j2
- name: Ensure container is up to date
community.docker.docker_compose:
project_src: "{{ peertube_folder_name }}"
remove_orphans: yes
pull: yes
recreate: smart
state: present

View File

@ -0,0 +1,36 @@
# Database / Postgres service configuration
POSTGRES_USER=peertube
POSTGRES_PASSWORD={{ peertube_db_password }}
# Postgres database name "peertube"
POSTGRES_DB=peertube
PEERTUBE_DB_USERNAME=peertube
PEERTUBE_DB_PASSWORD={{ peertube_db_password }}
PEERTUBE_DB_SSL=false
PEERTUBE_DB_HOSTNAME={{ peertube_db_container_name }}
PEERTUBE_SECRET={{ peertube_secret }}
# Server configuration
PEERTUBE_WEBSERVER_HOSTNAME={{ peertube_subdomain }}.{{ domain_name }}
PEERTUBE_WEBSERVER_PORT=9000
PEERTUBE_WEBSERVER_HTTPS=false
# If you need more than one IP as trust_proxy
# pass them as a comma separated array:
PEERTUBE_TRUST_PROXY=["127.0.0.1", "loopback", "172.18.0.0/16"]
# E-mail configuration
# If you use a Custom SMTP server
PEERTUBE_SMTP_USERNAME={{ peertube_subdomain }}
PEERTUBE_SMTP_PASSWORD={{ peertube_mail_password }}
PEERTUBE_SMTP_HOSTNAME=mail.gandi.net
PEERTUBE_SMTP_PORT=587
PEERTUBE_SMTP_FROM={{ peertube_subdomain }}@{{ domain_name }}
PEERTUBE_SMTP_TLS=true
PEERTUBE_SMTP_DISABLE_STARTTLS=false
PEERTUBE_ADMIN_EMAIL=quentinduchemin@tuta.io
# /!\ Prefer to use the PeerTube admin interface to set the following configurations /!\
#PEERTUBE_SIGNUP_ENABLED=true
#PEERTUBE_TRANSCODING_ENABLED=true
#PEERTUBE_CONTACT_FORM_ENABLED=true
PEERTUBE_REDIS_HOSTNAME={{ peertube_redis_container }}

View File

@ -0,0 +1,75 @@
version: "{{ compose_version }}"
networks:
proxy:
name: "{{ traefik_network }}"
db:
name: peertube_db
redis:
name: peertube_redis
volumes:
db:
name: peertube_db
assets:
name: peertube_assets
redis:
name: peertube_redis
data:
name: peertube_data
config:
name: peertube_config
services:
# You can comment this webserver section if you want to use another webserver/proxy or test PeerTube in local
webserver:
image: chocobozzz/peertube-webserver:latest
volumes:
- ./peertube.conf:/etc/nginx/conf.d/peertube.template
- assets:/var/www/peertube/peertube-latest/client/dist:ro
- data:/var/www/peertube/storage
env_file: conf.env
labels:
traefik.http.routers.peertube.entrypoints: websecure
traefik.http.routers.peertube.rule: "Host(`{{ peertube_subdomain }}.{{ domain_name }}`)"
traefik.http.services.peertube.loadbalancer.server.port: 80
traefik.enable: true
networks:
- proxy
restart: unless-stopped
app:
image: "chocobozzz/peertube:{{ peertube_version }}-bookworm"
container_name: peertube
networks:
- proxy
- db
- redis
volumes:
- assets:/app/client/dist
- data:/data
- config:/config
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
env_file: conf.env
restart: unless-stopped
db:
image: "postgres:{{ postgres_version }}"
container_name: "{{ peertube_db_container_name }}"
env_file: conf.env
volumes:
- db:/var/lib/postgresql/data
networks:
- db
restart: unless-stopped
redis:
image: "redis:{{ redis_version }}"
container_name: "{{ peertube_redis_container }}"
volumes:
- redis:/data
networks:
- db
restart: unless-stopped

View File

@ -0,0 +1,213 @@
# Minimum Nginx version required: 1.13.0 (released Apr 25, 2017)
# Please check your Nginx installation features the following modules via 'nginx -V':
# STANDARD HTTP MODULES: Core, Proxy, Rewrite, Access, Gzip, Headers, HTTP/2, Log, Real IP, SSL, Thread Pool, Upstream, AIO Multithreading.
# THIRD PARTY MODULES: None.
upstream backend {
  server peertube:9000;
}

server {
  listen 80;
  listen [::]:80;
  server_name tube.chosto.me;

  ##
  # Application
  ##

  location @api {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    client_max_body_size 10G; # default is 1M

    proxy_connect_timeout 10m;
    proxy_send_timeout 10m;
    proxy_read_timeout 10m;
    send_timeout 10m;

    proxy_pass http://backend;
  }

  location / {
    try_files /dev/null @api;
  }

  location = /api/v1/videos/upload-resumable {
    client_max_body_size 0;
    proxy_request_buffering off;

    try_files /dev/null @api;
  }

  location ~ ^/api/v1/videos/(upload|([^/]+/studio/edit))$ {
    limit_except POST HEAD { deny all; }

    # This is the maximum upload size, which roughly matches the maximum size of a video file.
    # Note that temporary space is needed equal to the total size of all concurrent uploads.
    # This data gets stored in /var/lib/nginx by default, so you may want to put this directory
    # on a dedicated filesystem.
    client_max_body_size 12G; # default is 1M
    add_header X-File-Maximum-Size 8G always; # inform backend of the set value in bytes before mime-encoding (x * 1.4 >= client_max_body_size)

    try_files /dev/null @api;
  }
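  # Sizing note: a mime-encoded upload is roughly 1.4x the size of the raw file, so an advertised
  # maximum of 8G (8G x 1.4 = 11.2G) still fits within the 12G client_max_body_size above.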
  location ~ ^/api/v1/runners/jobs/[^/]+/(update|success)$ {
    client_max_body_size 12G; # default is 1M
    add_header X-File-Maximum-Size 8G always; # inform backend of the set value in bytes before mime-encoding (x * 1.4 >= client_max_body_size)

    try_files /dev/null @api;
  }

  location ~ ^/api/v1/(videos|video-playlists|video-channels|users/me) {
    client_max_body_size 12G; # default is 1M
    add_header X-File-Maximum-Size 12G always; # inform backend of the set value in bytes before mime-encoding (x * 1.4 >= client_max_body_size)

    try_files /dev/null @api;
  }

  ##
  # Websocket
  ##

  location @api_websocket {
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_pass http://backend;
  }

  location /socket.io {
    try_files /dev/null @api_websocket;
  }

  location /tracker/socket {
    # Peers send a message to the tracker every 15 minutes
    # Don't close the websocket before then
    proxy_read_timeout 15m; # default is 60s

    try_files /dev/null @api_websocket;
  }

  # Plugin websocket routes
  location ~ ^/plugins/[^/]+(/[^/]+)?/ws/ {
    try_files /dev/null @api_websocket;
  }

  ##
  # Performance optimizations
  # For extra performance please refer to https://github.com/denji/nginx-tuning
  ##

  root /var/www/peertube/storage;

  # Enable compression for JS/CSS/HTML, for improved client load times.
  # It might be nice to compress JSON/XML as returned by the API, but
  # leaving that out to protect against potential BREACH attack.
  gzip on;
  gzip_vary on;
  gzip_types # text/html is always compressed by HttpGzipModule
    text/css
    application/javascript
    font/truetype
    font/opentype
    application/vnd.ms-fontobject
    image/svg+xml;
  gzip_min_length 1000; # default is 20 bytes
  gzip_buffers 16 8k;
  gzip_comp_level 2; # default is 1

  client_body_timeout 30s; # default is 60
  client_header_timeout 10s; # default is 60
  send_timeout 10s; # default is 60
  keepalive_timeout 10s; # default is 75
  resolver_timeout 10s; # default is 30
  reset_timedout_connection on;
  proxy_ignore_client_abort on;

  tcp_nopush on; # send headers in one piece
  tcp_nodelay on; # don't buffer data sent, good for small data bursts in real time

  # If you have a small /var/lib partition, it may be worth storing temporary nginx uploads somewhere else
  # See https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_temp_path
  #client_body_temp_path /var/www/peertube/storage/nginx/;

  # Bypass PeerTube for performance reasons. Optional.
  # Should be consistent with client-overrides assets list in client.ts server controller
  location ~ ^/client/(assets/images/(icons/icon-36x36\.png|icons/icon-48x48\.png|icons/icon-72x72\.png|icons/icon-96x96\.png|icons/icon-144x144\.png|icons/icon-192x192\.png|icons/icon-512x512\.png|logo\.svg|favicon\.png|default-playlist\.jpg|default-avatar-account\.png|default-avatar-account-48x48\.png|default-avatar-video-channel\.png|default-avatar-video-channel-48x48\.png))$ {
    add_header Cache-Control "public, max-age=31536000, immutable"; # Cache 1 year

    root /var/www/peertube;

    try_files /storage/client-overrides/$1 /peertube-latest/client/dist/$1 @api;
  }

  # Bypass PeerTube for performance reasons. Optional.
  location ~ ^/client/(.*\.(js|css|png|svg|woff2|otf|ttf|woff|eot))$ {
    add_header Cache-Control "public, max-age=31536000, immutable"; # Cache 1 year

    alias /var/www/peertube/peertube-latest/client/dist/$1;
  }

  location ~ ^(/static/(webseed|web-videos|streaming-playlists)/private/)|^/download {
    # We can't rate limit a try_files directive, so we need to duplicate @api
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    proxy_limit_rate 5M;

    proxy_pass http://backend;
  }

  # Bypass PeerTube for performance reasons. Optional.
  location ~ ^/static/(webseed|web-videos|redundancy|streaming-playlists)/ {
    limit_rate_after 5M;

    set $peertube_limit_rate 5M;

    # Use this line with nginx >= 1.17.0
    limit_rate $peertube_limit_rate;
    # Or this line with nginx < 1.17.0
    # set $limit_rate $peertube_limit_rate;

    if ($request_method = 'OPTIONS') {
      add_header Access-Control-Allow-Origin '*';
      add_header Access-Control-Allow-Methods 'GET, OPTIONS';
      add_header Access-Control-Allow-Headers 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
      add_header Access-Control-Max-Age 1728000; # Preflight request can be cached 20 days
      add_header Content-Type 'text/plain charset=UTF-8';
      add_header Content-Length 0;
      return 204;
    }

    if ($request_method = 'GET') {
      add_header Access-Control-Allow-Origin '*';
      add_header Access-Control-Allow-Methods 'GET, OPTIONS';
      add_header Access-Control-Allow-Headers 'Range,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

      # Don't spam access log file with byte range requests
      access_log off;
    }

    # Enabling the sendfile directive eliminates the step of copying the data into the buffer
    # and enables direct copying data from one file descriptor to another.
    sendfile on;
    sendfile_max_chunk 1M; # prevent one fast connection from entirely occupying the worker process. should be > 800k.

    aio threads;

    # web-videos is the name of the directory mapped to the `storage.web_videos` key in your PeerTube configuration
    rewrite ^/static/webseed/(.*)$ /web-videos/$1 break;
    rewrite ^/static/(.*)$ /$1 break;

    try_files $uri @api;
  }
}


@@ -0,0 +1,8 @@
peertube_version: "v6.0.3"
peertube_folder_name: "{{ docker_files }}/peertube"
peertube_subdomain: tube
peertube_db_container_name: "peertube_db"
postgres_version: "13"
redis_version: "6"
peertube_redis_container: "peertube_redis"
peertube_instance_name: "Babil"


@@ -0,0 +1,10 @@
min_cryptography_lib: 1.2.3
autorestic_base: /var/lib/autorestic
autorestic_config_path: "{{ autorestic_base }}/autorestic.yml"
autorestic_version: 1.7.7
autorestic_path: /usr/local/bin/autorestic
repository_path: /data
dbdumps_path: /dbdumps
# Password used to derive the encryption key for the repository (confidentiality)
restic_password: "{{ restic_password }}"
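# The actual value is expected to come from elsewhere (e.g. an encrypted variable source such
# as Ansible Vault); this entry only exposes it under a role-local name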


@@ -0,0 +1,78 @@
- name: Ensure necessary directories exist
  file:
    path: "{{ item }}"
    state: directory
  loop:
    - "{{ repository_path }}"
    - "{{ dbdumps_path }}"
    - "{{ autorestic_base }}"

- name: Download and install restic
  apt:
    name: restic
    update_cache: yes

- name: Install bzip2
  apt:
    name: bzip2
    update_cache: yes
  no_log: true

- name: Download autorestic
  get_url:
    url: "https://github.com/cupcakearmy/autorestic/releases/download/v{{ autorestic_version }}/autorestic_{{ autorestic_version }}_linux_amd64.bz2"
    dest: /tmp/autorestic.bz2

- name: Extract and install autorestic executable
  shell: "bzcat /tmp/autorestic.bz2 > {{ autorestic_path }}"
- name: Ensure autorestic has executable bit
  file:
    path: "{{ autorestic_path }}"
    mode: '0755'

- name: Copy configuration
  template:
    src: "autorestic.yml"
    dest: "{{ autorestic_config_path }}"

- name: Copy scripts
  template:
    src: "{{ item }}"
    dest: "{{ autorestic_base }}"
    mode: 0755
  loop:
    - backup_db.sh
    - start_backup.sh

- name: Ensure scripts are executable
  file:
    path: "{{ autorestic_base }}/{{ item }}"
    mode: 0755
  loop:
    - backup_db.sh
    - start_backup.sh

- name: Generate systemd timer and service
  template:
    src: "{{ item }}"
    dest: "/etc/systemd/system"
  loop:
    - autorestic.service
    - autorestic.timer

# Remove when PR #197 is merged
- name: Initialize Restic Rest repository
  shell: "RESTIC_PASSWORD='{{ restic_password }}' restic -r {{ repository_path }} init"
  failed_when: false

# Waiting for PR #197 to be merged
- name: Check configuration file is correct and create repositories if needed
  shell: "autorestic -c {{ autorestic_config_path }} check"

- name: Ensure timer is activated
  systemd:
    name: autorestic.timer
    enabled: true
    state: started
    daemon_reload: true


@@ -0,0 +1,10 @@
[Unit]
Description=Backups yay
[Service]
Type=oneshot
ExecStart={{ autorestic_base }}/start_backup.sh
# fail if backup takes more than 1 day
TimeoutStartSec=86400
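# Track network and memory usage of each backup run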
IPAccounting=yes
MemoryAccounting=yes


@@ -0,0 +1,9 @@
[Unit]
Description=Backups with autorestic
[Timer]
# Run the autorestic cron check every 10 minutes
OnCalendar=*:0/10:0
[Install]
WantedBy=timers.target


@@ -0,0 +1,26 @@
version: 2

global:
  forget:
    keep-hourly: 24
    keep-daily: 7
    keep-weekly: 4
    keep-monthly: 12

backends:
  pica03:
    type: local
    path: {{ repository_path }}
    key: {{ restic_password }}

locations:
  funkwhale:
    from:
      - /var/lib/docker/volumes/funkwhale_static
      - {{ dbdumps_path }}/funkwhale_postgres
    to: pica03
    cron: 0 3 * * *
    forget: "yes"
    hooks:
      before:
        - {{ autorestic_base }}/backup_db.sh funkwhale_postgres postgresql
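# Scheduling note: "cron: 0 3 * * *" makes this location due once a day at 03:00; the systemd
# timer above only fires `autorestic cron` every 10 minutes, and autorestic itself decides
# whether the backup actually has to run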


@@ -0,0 +1,61 @@
#!/usr/bin/env bash
# usage: <script> <container-name> <database-type>
#
# exports the database of a running docker container to a dump in $BACKUP_DIR/$CONTAINER/
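# example (matching the autorestic hook above): backup_db.sh funkwhale_postgres postgresql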
BACKUP_DIR={{ dbdumps_path }}
# Check container existence
CONTAINER="$1"
if ! docker ps | grep -q "$CONTAINER"
then
  echo "The container $CONTAINER doesn't exist or isn't running"
  exit 1
fi

# Check database type
TYPE="$2"
COMMAND=""
case "$TYPE" in
  postgresql)
    POSTGRES_USER=$(docker exec "$CONTAINER" env | grep POSTGRES_USER | cut -d= -f2)
    COMMAND="pg_dumpall -c -U $POSTGRES_USER"
    EXTENSION=sql
    ;;
  mariadb)
    MARIADB_USER=$(docker exec "$CONTAINER" env | grep MYSQL_USER | cut -d= -f2)
    MARIADB_PASSWORD=$(docker exec "$CONTAINER" env | grep MYSQL_PASSWORD | cut -d= -f2)
    COMMAND="mysqldump -u $MARIADB_USER --password=$MARIADB_PASSWORD --all-databases"
    EXTENSION=sql
    ;;
  mongodb)
    COMMAND="mongodump --archive"
    EXTENSION=mongodump
    ;;
  ldap-config)
    COMMAND="slapcat -n 0"
    EXTENSION=config.ldif
    ;;
  ldap-content)
    COMMAND="slapcat -n 1"
    EXTENSION=content.ldif
    ;;
  *)
    echo "Unknown database type: $TYPE"
    exit 1
esac

# Ensure directory exists
mkdir -p "$BACKUP_DIR/$CONTAINER"

# Export database
docker exec "$CONTAINER" $COMMAND > "$BACKUP_DIR/$CONTAINER/dump.$EXTENSION"
exit $?


@@ -0,0 +1,25 @@
#!/usr/bin/env sh
if [ ! -f /tmp/last_autorestic_check_date ]
then
  touch /tmp/last_autorestic_check_date
fi
current_date=$(date +"%D")
last_autorestic_check_date=$(cat /tmp/last_autorestic_check_date)
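# Drop any stale repository locks (e.g. left by an interrupted run) before going further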
{{ autorestic_path }} -c {{ autorestic_config_path }} --ci exec -av -- unlock
# Only run the configuration check once a day
if [ "$current_date" != "$last_autorestic_check_date" ]
then
  # TODO: use `exec -- check` when PR #253 is merged (more verbose)
  {{ autorestic_path }} -c {{ autorestic_config_path }} check
  if [ $? -ne 0 ]
  then
    # Non-zero exit so systemd reports the failed check instead of silently skipping backups
    exit 1
  fi
  echo $current_date > /tmp/last_autorestic_check_date
fi
{{ autorestic_path }} -vvv -c {{ autorestic_config_path }} --ci cron


@@ -12,6 +12,7 @@ services:
      - "{{ traefik_http_port }}:80"
      - "{{ traefik_https_port}}:443"
    volumes:
+      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
@@ -32,7 +33,12 @@ services:
      traefik.http.middlewares.traefik-api-auth.basicauth.users: "dashboard:{{ traefik_dashboard_htpasswd | replace("$", "$$") }}"
      traefik.enable: true
    environment:
-      GANDIV5_API_KEY: "{{ gandi_api_key }}"
+      OVH_APPLICATION_KEY: "{{ ovh_app_key }}"
+      OVH_APPLICATION_SECRET: "{{ ovh_app_secret }}"
+      OVH_CONSUMER_KEY: "{{ ovh_consumer_key }}"
+      OVH_ENDPOINT: ovh-eu
+      OVH_POLLING_INTERVAL: 0
+      OVH_TTL: 3600
    networks:
      - proxy
    restart: unless-stopped


@@ -42,7 +42,7 @@
  email = "{{ letsencrypt_email }}"
  storage = "/certs/acme.json"
  [certificatesResolvers.letsencrypt.acme.dnsChallenge]
-    provider = "gandiv5"
+    provider = "ovh"
    delayBeforeCheck = 10

[metrics]


@@ -14,6 +14,8 @@ services:
    volumes:
      - {{ websites_basepath }}/{{ website.name }}:/var/www/html:ro
      - {{ websites_basepath }}/{{ website.name }}.conf:/etc/nginx/conf.d/default.conf:ro
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
    labels:
      traefik.http.routers.{{ website.name }}.entrypoints: websecure
      traefik.http.routers.{{ website.name }}.rule: "Host(`{{ website.name }}.{{ domain_name }}`)"
@@ -43,5 +45,7 @@ services:
      - {{ websites_basepath }}/{{ website.name }}:/var/www/html/{{ website.name }}:ro
{% endif %}
{% endfor %}
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
{% endif %}