Note: New registrations are currently closed. If you want an account, contact us.

Poddery - Diaspora, Matrix and XMPP

From FSCI Wiki
We run the decentralized and federated [https://diasporafoundation.org/ Diaspora] social network along with [https://xmpp.org/ XMPP] and [https://matrix.org Matrix] instant messaging services at [https://poddery.com poddery.com]. Your Poddery username and password can be used to access the XMPP and Matrix services as well. [https://chat.poddery.com chat.poddery.com] provides the Riot client (accessed by a web browser), which can be used to connect to any Matrix server without installing a Riot app.


= Environment =
== Hosting ==
Poddery is hosted at [https://www.hetzner.com Hetzner] with the following specs:


* Intel Xeon E3-1246V3 processor - 4 cores, 3.5GHz
* 4TB HDD
* 32GB DDR3 RAM


== Operating System ==
* Debian Buster
== User Visible Services ==
=== Diaspora ===
* The currently installed version is 0.7.6.1, which is available in [https://packages.debian.org/buster/diaspora-installer Debian Buster contrib]
* For live statistics see https://poddery.com/statistics


=== Chat/XMPP ===
* [https://prosody.im/ Prosody] is used as the XMPP server which is modern and lightweight.
* The currently installed version is 0.11.2, which is available in [https://packages.debian.org/buster/prosody Debian Buster].
* All XEPs supported by the [https://conversations.im/ Conversations app] are enabled.


=== Chat/Matrix ===
* [https://matrix.org/docs/projects/server/synapse.html Synapse] is used as the Matrix server.
* Synapse is currently installed directly from the [https://github.com/matrix-org/synapse official GitHub repo].
* The Riot-web Matrix client is hosted at https://chat.poddery.com


=== Homepage ===
Homepage and other static pages are maintained in FSCI [https://git.fosscommunity.in GitLab instance].
* poddery.com -> https://git.fosscommunity.in/community/poddery.com
* save.poddery.com -> https://git.fosscommunity.in/community/save.poddery.com
* fund.poddery.com -> https://git.fosscommunity.in/community/fund-poddery
 
== Backend Services ==
=== Web Server / Reverse Proxy ===
* Nginx web server which also acts as front-end (reverse proxy) for Diaspora and Matrix.
 
=== Database ===
* PostgreSQL for Matrix
* MySQL for Diaspora
 
''TODO'': Consider migrating to PostgreSQL to optimize resources (we could drop one database service and reduce RAM usage).
 
=== Email ===
* Exim
 
=== SSL/TLS certificates ===
* Let's Encrypt
 
=== Firewall ===
* UFW (Uncomplicated Firewall)
 
=== Intrusion Prevention ===
* Fail2ban
 
= Coordination =
* [https://codema.in/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group Loomio group] - Mainly used for decision making
* Matrix room [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com], also bridged to the XMPP room [xmpp:poddery.com-support@chat.yax.im?join poddery.com-support@chat.yax.im]
* [https://git.fosscommunity.in/community/poddery.com/issues Issue tracker] - Used for tracking progress of tasks
 
=== Contact ===
* Email: poddery at autistici.org (alias that reaches Akhilan, Abhijith Balan, Fayad, Balasankar, Julius, Praveen, Prasobh, Sruthi, Shirish, Vamsee and Manukrishnan)
* The following people have their GPG keys in the [[#Server_Access|access file]]:
** ID: 0xCE1F9C674512C22A - Praveen Arimbrathodiyil (piratepin)
** ID: 0xB77D2E2E23735427 - Balasankar C
** ID: 0x5D0064186AF037D9 - Manu Krishnan T V
** ID: 0x51C954405D432381 - Fayad Fami (fayad)
** ID: 0x863D4DF2ED9C28EF - Abhijith PA
** ID: 0x6EF48CCD865A1FFC - Syam G Krishnan (sgk)
** ID: 0xFD49D0BC6FEAECDA - Sagar Ippalpalli
** ID: 0x92FDAB42A95FF20C - Pirate Bady (piratesin)
** ID: 0x0B1955F40C691CCE - Kannan
** ID: 0x32FF6C6F5B7AE248 - Akhil Varkey
** ID: 0xFBB7061C27CB70C1 - Ranjith Siji
** ID: 0xEAAFE4A8F39DE34F - Kiran S Kunjumon (hacksk)
* It's recommended to set up the [http://www.vim.org/scripts/script.php?script_id=3645 Vim GnuPG Plugin] for transparent editing. Those who are new to GPG can follow [https://www.madboa.com/geek/gpg-quickstart/ this guide].
 
=== Server Access ===
Maintained in a private git repo at https://git.fosscommunity.in/community/access
 
= Configuration and Maintenance =
 
Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system
 
== Disk Partitioning ==
* RAID 1 setup on 2x2TB HDDs (<code>sda</code> and <code>sdb</code>).
mdadm --verbose --create /dev/mdX --level=mirror --raid-devices=2 /dev/sdaY /dev/sdbY
* Separate partitions for swap (<code>md0</code> - 16GB), boot (<code>md1</code> - 512MB) and root (<code>md2</code> - 50GB).
* LVM on Luks for separate encrypted data partitions for database, static files and logs.
# Setup LUKS (make sure <code>lvm2</code>, <code>udev</code> and <code>cryptsetup</code> packages are installed).
cryptsetup luksFormat /dev/mdX
# Give disk encryption password as specified in the [[#Server_Access|access repo]]
cryptsetup luksOpen /dev/mdX poddery
# LVM Setup
# Create physical volume named <code>poddery</code>
pvcreate /dev/mapper/poddery
# Create volume group named <code>data</code>
vgcreate data /dev/mapper/poddery
# Create logical volumes named <code>log</code>, <code>db</code> and <code>static</code>
lvcreate -n log /dev/data -L 50G
lvcreate -n db /dev/data -L 500G
# Assign remaining free space for static files
lvcreate -n static /dev/data -l 100%FREE
# Setup filesystem on the logical volumes
mkfs.ext4 /dev/data/log
mkfs.ext4 /dev/data/db
mkfs.ext4 /dev/data/static
# Create directories for mounting the encrypted partitions
mkdir /var/lib/db /var/lib/static /var/log/poddery
# Manually mount encrypted partitions. This is needed on each reboot since Hetzner doesn't provide a web console, so the partitions can't be decrypted during boot.
mount /dev/data/db /var/lib/db
mount /dev/data/static /var/lib/static
mount /dev/data/log /var/log/poddery
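Since the encrypted partitions must be remounted manually after every reboot, the sequence above can be wrapped in a small helper. This is only an illustrative sketch, not part of the actual setup: the <code>run</code> dry-run wrapper is made up, and <code>/dev/mdX</code> must be replaced with the real RAID device.

```shell
#!/bin/sh
# Illustrative post-reboot unlock/mount helper (hypothetical script).
# run() echoes each command instead of executing it; replace its body
# with "$@" to perform the real operations as root.
run() { echo "+ $*"; }

run cryptsetup luksOpen /dev/mdX poddery   # prompts for the LUKS passphrase
run mount /dev/data/db /var/lib/db
run mount /dev/data/static /var/lib/static
run mount /dev/data/log /var/log/poddery
```

With the dry-run wrapper in place the script only prints the commands it would run, which makes it safe to review before switching it to execute for real.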
 
== Hardening checklist ==
* SSH password based login disabled (allow only key based logins)
* SSH login disabled for root user (use a normal user with sudo)
# Check for the following settings in /etc/ssh/sshd_config:
...
PermitRootLogin no
...
PasswordAuthentication no
...
 
* <code>ufw</code> firewall enabled with only the ports that need to be opened ([https://fxdata.cloud/tutorials/set-up-a-firewall-with-ufw-on-ubuntu-16-04 ufw tutorial]):
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow http/tcp
ufw allow https/tcp
ufw allow Turnserver
ufw allow XMPP
ufw allow 8448


ufw enable
# Verify everything is setup properly
ufw status
# Enable ufw logging with default mode low
ufw logging on


* <code>fail2ban</code> configured against brute force attacks:
# Check for the following line in <code>/etc/ssh/sshd_config</code>:
...
LogLevel VERBOSE
...
# Restart SSH and enable fail2ban
systemctl restart ssh
systemctl enable fail2ban
systemctl start fail2ban
# To unban an IP, first check <code>/var/log/fail2ban.log</code> to get the banned IP and then run the following
# Here <code>sshd</code> is the default jail name; change it if you are using a different jail
fail2ban-client set sshd unbanip <banned_ip>
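As a concrete illustration of the unban step, the banned IPs can be pulled out of the log before feeding them to <code>fail2ban-client</code>. The log lines below are made up for the example; real entries in <code>/var/log/fail2ban.log</code> have the same <code>Ban &lt;ip&gt;</code> shape.

```shell
# Sample log lines (illustrative); real ones live in /var/log/fail2ban.log
cat > /tmp/fail2ban.sample <<'EOF'
2019-04-02 10:01:02,123 fail2ban.actions[123]: NOTICE [sshd] Ban 192.0.2.10
2019-04-02 10:05:09,456 fail2ban.actions[123]: NOTICE [sshd] Unban 192.0.2.10
2019-04-02 11:00:00,789 fail2ban.actions[123]: NOTICE [sshd] Ban 198.51.100.7
EOF

# List IPs that were banned (last field of " Ban " lines; "Unban" doesn't match)
awk '/ Ban /{print $NF}' /tmp/fail2ban.sample
# prints 192.0.2.10 and 198.51.100.7; each can then be unbanned with:
#   fail2ban-client set sshd unbanip <banned_ip>
```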


== Diaspora ==
* Install <code>diaspora-installer</code> from Debian Buster contrib:
apt install diaspora-installer

* Move MySQL data to the encrypted partition:
# Make sure <code>/dev/data/db</code> is mounted to <code>/var/lib/db</code>
systemctl stop mysql
systemctl disable mysql
mv /var/lib/mysql /var/lib/db/
ln -s /var/lib/db/mysql /var/lib/
systemctl start mysql

* Move static files to the encrypted partition:
# Make sure <code>/dev/data/static</code> is mounted to <code>/var/lib/static</code>
mkdir /var/lib/static/diaspora
mv /usr/share/diaspora/public/uploads /var/lib/static/diaspora
ln -s /var/lib/static/diaspora/uploads /usr/share/diaspora/public/
chown -R diaspora: /var/lib/static/diaspora

* Modify configuration files at <code>/etc/diaspora</code> and <code>/etc/diaspora.conf</code> as needed (backups of the current configuration files are available in the [[#Server_Access|access repo]]).
* Homepage configuration:
# Make sure <code>git</code> and <code>acl</code> packages are installed
# Grant <code>rwx</code> permissions for the ssh user to <code>/usr/share/diaspora/public</code>
setfacl -m "u:<ssh_user>:rwx" /usr/share/diaspora/public
# Clone poddery.com repo
cd /usr/share/diaspora/public
git clone https://git.fosscommunity.in/community/poddery.com.git
cd poddery.com && mv * .[^.]* .. # Answer yes for all files when prompted
cd .. && rmdir poddery.com

* The [https://save.poddery.com Save Poddery] repo is maintained as a submodule in the poddery.com repo. See this [https://chrisjean.com/git-submodules-adding-using-removing-and-updating/ tutorial] for working with git submodules.
# Clone save.poddery.com repo
cd /usr/share/diaspora/public/save
git submodule init
git submodule update


== Matrix ==
* See the [https://github.com/matrix-org/synapse/blob/master/INSTALL.md official installation guide] of Synapse for installing from source.
* Nginx is used as a reverse proxy to send requests that have <code>/_matrix/*</code> in the URL to Synapse on port <code>8008</code>. This is configured in <code>/etc/nginx/sites-enabled/diaspora</code>.
* Shamil's [https://git.fosscommunity.in/necessary129/synapse-diaspora-auth Synapse Diaspora Auth] script is used to authenticate Synapse with Diaspora database.
* Move PostgreSQL data to encrypted partition:
# Make sure <code>/dev/data/db</code> is mounted to <code>/var/lib/db</code>
systemctl stop postgresql
systemctl disable postgresql
mv /var/lib/postgres /var/lib/db/
ln -s /var/lib/db/postgres /var/lib/
systemctl start postgresql
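The reverse-proxy rule described above can be sketched as the following <code>nginx</code> fragment. This is a minimal illustration only; the real configuration in <code>/etc/nginx/sites-enabled/diaspora</code> contains many more overrides.

```nginx
# Minimal sketch of the /_matrix/ proxy rule (illustrative, not the live config)
location /_matrix/ {
    proxy_pass http://127.0.0.1:8008;
    proxy_set_header X-Forwarded-For $remote_addr;
}
```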


* Move static files to the encrypted partition:
# Make sure <code>/dev/data/static</code> is mounted to <code>/var/lib/static</code>
mkdir /var/lib/static/synapse
mv /var/lib/matrix-synapse/uploads /var/lib/static/synapse/
ln -s /var/lib/static/synapse/uploads /var/lib/matrix-synapse/
mv /var/lib/matrix-synapse/media /var/lib/static/synapse/
ln -s /var/lib/static/synapse/media /var/lib/matrix-synapse/
chown -R matrix-synapse: /var/lib/static/synapse

* Install the identity server <code>mxisd</code> (<code>deb</code> package available [https://github.com/kamax-matrix/mxisd/blob/master/docs/install/debian.md here])


=== Workers ===
* For scalability, Poddery is running [https://github.com/matrix-org/synapse/blob/master/docs/workers.md workers]. Currently all workers specified on that page, except <code>synapse.app.appservice</code>, are running on poddery.com.
* A new service, [https://gist.github.com/necessary129/5dfbb140e4727496b0ad2bf801c10fdc <code>matrix-synapse@.service</code>], is installed for the workers (save the <code>synapse_worker</code> file somewhere like <code>/usr/local/bin/</code>).
* The worker config can be found at <code>/etc/matrix-synapse/workers</code>.
* Synapse needs to be put behind a reverse proxy; see <code>/etc/nginx/sites-enabled/matrix</code>. A lot of <code>/_matrix/</code> URLs need to be overridden too; see <code>/etc/nginx/sites-enabled/diaspora</code>.
* These lines must be added to <code>homeserver.yaml</code> as we are running the <code>media_repository</code>, <code>federation_sender</code>, <code>pusher</code> and <code>user_dir</code> workers respectively:
  enable_media_repo: False
  send_federation: False
  start_pushers: False
  update_user_directory: false

* These services must be enabled:
matrix-synapse@synchrotron.service matrix-synapse@federation_reader.service matrix-synapse@event_creator.service matrix-synapse@federation_sender.service matrix-synapse@pusher.service matrix-synapse@user_dir.service matrix-synapse@media_repository.service matrix-synapse@frontend_proxy.service matrix-synapse@client_reader.service matrix-synapse@synchrotron_2.service
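One way to tie the worker units to the main unit is a systemd drop-in so they start and stop together. The drop-in path below is illustrative and only two of the workers are shown:

```ini
# /etc/systemd/system/matrix-synapse.service.d/workers.conf (illustrative path)
[Unit]
Requires=matrix-synapse@synchrotron.service matrix-synapse@federation_sender.service
Before=matrix-synapse@synchrotron.service matrix-synapse@federation_sender.service
```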


To load balance between the two synchrotrons, we are running [https://github.com/Sorunome/matrix-synchrotron-balancer matrix-synchrotron-balancer]. It has a systemd unit file at <code>/etc/systemd/system/matrix-synchrotron-balancer</code>; the files are in <code>/opt/matrix-synchrotron-balancer</code>.


=== Synapse Updation ===
* First check [https://matrix-org.github.io/synapse/latest/upgrade the upgrade notes] to see if anything extra needs to be done, then just run <code>/root/upgrade-synapse</code>.
* The current version of Synapse can be found at https://poddery.com/_matrix/federation/v1/version


=== Riot-web Updation ===
* Just run the following (make sure to replace <code><version></code> with a proper version number like <code>v1.0.0</code>):
/var/www/get-riot <version>


== Chat/XMPP ==
* Steps for setting up Prosody are given at https://wiki.debian.org/Diaspora/XMPP
# Follow steps 1 to 6 from https://wiki.debian.org/Diaspora/XMPP and then run the following:
mysql -u root -p # Enter password from the access repo
CREATE USER 'prosody'@'localhost' IDENTIFIED BY '<passwd_in_repo>';
GRANT ALL PRIVILEGES ON diaspora_production.* TO 'prosody'@'localhost';
FLUSH PRIVILEGES;
systemctl restart prosody


* Install plugins
# Make sure <code>mercurial</code> is installed
cd /etc && hg clone https://hg.prosody.im/prosody-modules/ prosody-modules


=== Set Nginx Conf for BOSH URLS ===
* Add the following to the <code>nginx</code> configuration file to enable the BOSH URL and make JSXC work:
  upstream chat_cluster {
   server localhost:5280;
  }


* [https://wiki.diasporafoundation.org/Integration/Chat#Nginx See here] for more details on <code>nginx</code> configuration. Alternatively, <code>apache</code> settings can be found [https://github.com/jsxc/jsxc/wiki/Prepare-apache here].
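A minimal sketch of the corresponding BOSH <code>location</code> block, assuming the standard <code>/http-bind</code> endpoint (see the linked pages for the full, authoritative configuration):

```nginx
# Illustrative BOSH proxy rule; details come from the linked guides, not the live config
location /http-bind {
    proxy_pass http://chat_cluster/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_buffering off;
}
```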


== TLS ==
* Install <code>letsencrypt</code>.
* Ensure proper permissions are set for <code>/etc/letsencrypt</code> and its contents.
chown -R root:ssl-cert /etc/letsencrypt
chmod g+r -R /etc/letsencrypt
chmod g+x /etc/letsencrypt/{archive,live}
* Generate certificates. For more details see https://certbot.eff.org.
* Make sure the certificates used by <code>diaspora</code> are symbolic links to letsencrypt default location:
ls -l /etc/diaspora/ssl
''total 0''
''lrwxrwxrwx 1 root root 47 Apr  2 22:47 poddery.com-bundle.pem -> /etc/letsencrypt/live/poddery.com/fullchain.pem''
''lrwxrwxrwx 1 root root 45 Apr  2 22:48 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem''
# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/diaspora/ssl/poddery.com-bundle.pem
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/diaspora/ssl/poddery.com.key


* Make sure the certificates used by <code>prosody</code> are symbolic links to the letsencrypt default location:
ls -l /etc/prosody/certs/
''total 0''
''lrwxrwxrwx 1 root root 40 Mar 28 01:16 poddery.com.crt -> /etc/letsencrypt/live/poddery.com/fullchain.pem''
''lrwxrwxrwx 1 root root 33 Mar 28 01:16 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem''
# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/prosody/certs/poddery.com.crt
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/prosody/certs/poddery.com.key


* Note: the <code>letsencrypt</code> executable used below is actually a symlink to <code>/usr/bin/certbot</code>.
* Cron jobs:
crontab -e
''30 2 * * 1 letsencrypt renew  >> /var/log/le-renew.log''
''32 2 * * 1 /etc/init.d/nginx reload''
''34 2 * * 1 /etc/init.d/prosody reload''


* Manually updating TLS certificate:
letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public  -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
* To include an additional subdomain such as fund.poddery.com, use the <code>--expand</code> parameter as shown below:
letsencrypt certonly --webroot --agree-tos --expand -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com


==Backup==


The backup server is provided by Manu (a KVM virtual machine with 180GB storage and 1GB RAM).


Debian Stretch was upgraded to Debian Buster before replication of the Synapse database.


Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/

Currently the postgres database for matrix-synapse is backed up.
===Before Replication (specific to poddery.com)===


Setup tinc vpn on the backup server:

  # apt install tinc

Configure tinc by creating <code>tinc.conf</code> and the host <code>podderybackup</code> under the label <code>fsci</code>.

Add <code>tinc-up</code> and <code>tinc-down</code> scripts.

Copy the <code>poddery</code> host config to the backup server and the <code>podderybackup</code> host config to the poddery.com server.

Reload the tinc vpn service on both the poddery.com and backup servers:

  # systemctl reload tinc@fsci.service

Enable the <code>tinc@fsci</code> systemd service for autostart:

  # systemctl enable tinc@fsci.service
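The tinc configuration described above might look roughly like this on the backup server. All values are illustrative; the real configs live on the servers and in the access repo.

```
# /etc/tinc/fsci/tinc.conf on the backup server (values illustrative)
Name = podderybackup
ConnectTo = poddery
Interface = tun0
```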


The synapse database was also pruned to reduce its size before replication, by following this guide: https://levans.fr/shrink-synapse-database.html
If you want to follow this guide, make sure the matrix-synapse server is updated to at least version 1.13, since that release introduces the Rooms API mentioned in the guide.
Changes done to the steps in the guide:


   # jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt

The room list obtained this way can be looped over to pass the room names as variables to the purge API:
  # set +H // if you are using bash, to avoid '!' in the room name triggering history substitution
  # for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
    -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
    'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;
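The purge loop can be dry-run first with <code>curl</code> stubbed out, to inspect the request bodies before hitting the real admin API. The room IDs below are made up for the example:

```shell
# Stub curl so the loop only prints what it would send; remove the stub
# (and use a real access token) to perform the actual requests.
curl() { echo "curl $*"; }

printf '%s\n' '!abc:poddery.com' '!def:poddery.com' > /tmp/to_purge.txt
set +H 2>/dev/null || true   # avoid '!' triggering history expansion in bash

for room_id in $(cat /tmp/to_purge.txt); do
  curl -X POST -H "Content-Type: application/json" \
    -d "{ \"room_id\": \"$room_id\" }" \
    'https://127.0.0.1:8008/_synapse/admin/v1/purge_room'
done
```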
We also did not remove the old history of large rooms.
  sslh.service - SSL/SSH multiplexer
  Loaded: loaded (/lib/systemd/system/sslh.service; enabled)
  Active: active (running) since Fri 2018-01-05 07:29:27 UTC; 4 weeks 1 days ago
    Docs: man:sslh(8)
  Main PID: 5444 (sslh)
  CGroup: /system.slice/sslh.service
          ├─  713 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─  830 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 1672 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 1673 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 3514 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 3875 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 3876 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 3896 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 4965 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 5395 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 5444 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 5445 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 5963 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 6617 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 6774 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 6957 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 7063 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─ 7083 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          ├─25613 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg
          └─27481 /usr/sbin/sslh --foreground -F /etc/sslh/sslh.cfg


= Coordination =
===Step 1: Postgresql (for synapse) Primary configuration===


*[https://www.loomio.org/g/2bjVXqAu/fosscommunity-in-poddery-com-maintainer-s-group loomio group] - we use this for decision making.
Create postgresql user for replication.
* Hangout with us in our Matrix room [https://matrix.to/#/#poddery:poddery.com #poddery:poddery.com]
* [https://git.fosscommunity.in/community/poddery.com/issues issue tracker] - we use this to track progress of tasks


=== Contact ===
$ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"
The password is in the access repo if you need it later.


Email: poddery at autistici.org (alias that reaches Akhilan, Abhijith Balan, Fayad, Balasankar, Julius, Praveen, Prasobh, Sruthi, Shirish, Vamsee and Manukrishnan)
Allow standby to connect to primary using the user just created.


The following people have their GPG keys in the password file.
$ cd /etc/postgresql/11/main


Praveen Arimbrathodiyil (piratepin) (ID: 0xCE1F9C674512), Balasankar C (ID: 0x96EDAB9B2E6B7171), Manu Krishnan T V (ID: 0x5D0064186AF037D9), Fayad Fami (fayad) (ID: 0x51C954405D432381), Abhijith PA (ID: 0x863D4DF2ED9C28EF), Syam G Krishnan (sgk) (ID: 0x6EF48CCD865A1FFC), Sagar Ippalpalli (ID: 0xFD49D0BC6FEAECDA), Pirate Bady (piratesin) (ID: 0x92FDAB42A95FF20C), Kannan (ID: 0x0B1955F40C691CCE)
$ nano pg_hba.conf


We recommend you setup [http://www.vim.org/scripts/script.php?script_id=3645 Vim GPG Plugin] for transparent editing. If you are new to GPG, then follow [https://www.madboa.com/geek/gpg-quickstart/ this guide].
Add below line to allow replication user to get access to the server


=== Server Access ===
host    replication    replication    172.16.0.3/32  md5


Maintained in a private git repo at -> https://git.fosscommunity.in/community/access
Next , open the postgres configuration file


= Setting up Backup =
nano postgresql.conf


Backup was setup on a Scaleway C1 VPS (4 core ARM processor with 2GB RAM). '''TODO: C1 server was crashing frequently and we need to setup backup again on VPS provided by Manu'''.
Set the following configuration options in the postgresql.conf file


Hostname (IP): backup.poddery.com (No public ip, access via scaleway.com web console). If you restart this machine, you may want to add poddery.com private ip in /etc/hosts
listen_addresses = 'localhost,172.16.0.2'
port=5432
wal_level = replica
max_wal_senders = 1
wal_keep_segments = 64
archive_mode = on
archive_command = 'cd .'


# apt-get install lvm2 cryptsetup
You need to restart since postgresql.conf was edited and parameters changed,


Directly creating luks volume on /dev/nbd1 is not working, so we use a logical volume
# systemctl restart postgresql


# pvcreate /dev/nbd1
===Step 2: Postgresql (for synapse) Standby configuration ===
# vgcreate data /dev/nbd1
# lvcreate -n diaspora -L 46.5G /dev/data


# cryptsetup luksFormat /dev/data/diaspora
Install postgresql
# cryptsetup luksOpen /dev/data/diaspora diaspora


and update /etc/crypttab
  # apt install postgresql
  # <target name> <source device>        <key file>      <options>
diaspora /dev/data/diaspora none luks


Check postgresql server is running


  # mkfs.ext4 /dev/mapper/diaspora
  # su postgres -c psql
# mkdir /var/lib/diaspora
and update /etc/fstab
# UNCONFIGURED FSTAB FOR BASE SYSTEM
/dev/mapper/diaspora /var/lib/diaspora ext4 defaults 0 2


# mount -a
Make sure en_US.UTF-8 locale is available
# apt-get install mysql-server


Move MySQL data directory to encrypted volume
  # dpkg-reconfigure locales
  # /etc/init.d/mysql stop
# mv /var/lib/mysql /var/lib/diaspora/
# ln -s /var/lib/diaspora/mysql /var/lib/mysql


Follow steps in https://dev.mysql.com/doc/refman/5.5/en/replication-howto-masterbaseconfig.html for replication
Stop postgresql before changing any configuration


Follow steps in https://www.howtoforge.com/how-to-set-up-mysql-database-replication-with-ssl-encryption-on-centos-5.4 for ssl (but ssl support is disabled in debian)
#systemctl stop postgresql@11-main


Follow steps in http://www.networkcomputing.com/storage/how-set-ssh-encrypted-mysql-replication/1111882674 to use ssh port forwarding to have encrypted replication
Switch to postgres user


  # adduser sshtunnel --disabled-login
  # su - postgres
  # su sshtunnel
  $ cd /etc/postgresql/11/


Generate SSH key pair and copy public key to target system
Copy data from master and create recovery.conf
$ ssh-keygen -t rsa
$ ssh -f sshtunnel@poddery.com -L 7777:127.0.0.1:3306 -N


Test the connectivity
  $ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U rep --wal-method=fetch  -R
  # mysql -u poddery_backup -p -P 7777 -h 127.0.0.1


Uploads are rsynced every hour
Open the postgres configuration file


  # crontab -e
  $ nano postgresql.conf
# m h  dom mon dow  command
0 * * * * pgrep rsync || rsync -av --delete root@poddery.com:/var/lib/diaspora/uploads/ /var/lib/diaspora/uploads/ >/var/lib/diaspora/rsync-uploads.log


Set the following configuration options in the postgresql.conf file


'''Note:''' Since we are not using a public ip (saves us money), backup.poddery.com connects to poddery.com via private ip. So if poddery.com is rebooted, the new ip address should be updated in /etc/hosts file of backup.poddery.com. To connect, use the web console from scaleway.com
max_connections = 500 // This option and the one below are set to be same as in postgresql.conf at primary or the service won't start.
max_worker_processes = 16
host_standby = on // The above pg_basebackup command should set it. If it's not manually turn it to on.


= Add more disk space =
Start the stopped postgresql service


# Power off the machine with "ARCHIVE" option. It may take upto an hour for shutdown to complete on backup.poddery.com and poddery.com
# systemctl start postgresql@11-main
# Add more disk from scaleway.com control panel . Volumes -> CREATE VOLUME
# Attach the newly created volume to server from Server page
# Power on the server
# Create physical volume (pvcreate /dev/nbdN)
# Expand volume group (vgextend data /dev/nbdN)
# Expand logical volume (lvresize --size=186G data/diaspora)
# Expand encrypted partition (cryptsetup resize diaspora)
# Resize file system (resize2fs /dev/mapper/diaspora)


= Maintenance history =
===Postgresql (for synapse) Replication Status===
This section holds maintenance/issue history for future tracking.


'''When updating diaspora-installer-mysql packages, remember to recreate /usr/share/diaspora/public/uploads symlink to /var/lib/diaspora/uploads'''.
On Primary,


1. Prosody error - Failed to load private key
$ ps -ef | grep sender
$ psql -c "select * from pg_stat_activity where usename='rep';"


certmanager error SSL/TLS: Failed to load '/etc/letsencrypt/live/poddery.com/privkey.pem': Previous error (see logs), or other system error. (for poddery.com)
On Standby,
tls error  Unable to initialize TLS: error loading private key (system lib)
certmanager error SSL/TLS: Failed to load '/etc/letsencrypt/live/poddery.com/privkey.pem': Check that the permissions allow Prosody to read this file.


This error is usually when ssl certificate in freshly installed or renewed. Prosody user is unable to access the key file due to lack of privileges.
$ ps -ef | grep receiver


Note that Poddery uses Letsencrypt for ssl.
= Troubleshooting =
== Allow XMPP login even if diaspora account is closed ==
Diaspora has a [https://github.com/diaspora/diaspora/blob/develop/Changelog.md#new-maintenance-feature-to-automatically-expire-inactive-accounts default setting] to close accounts that have been inactive for 2 years. At the time of writing, there seems [https://github.com/diaspora/diaspora/issues/5358#issuecomment-371921462 no way] to reopen a closed account. This also means that if your account is closed, you will no longer be able to login to the associated XMPP service as well. Here we discuss a workaround to get access back to the XMPP account.


Fix:  
The prosody module [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua mod_auth_diaspora] is used for diaspora-based XMPP auth. It checks if <code>locked_at</code> value in the <code>users</code> table of diaspora db is <code>null</code> [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L89 here] and [https://gist.github.com/jhass/948e8e8d87b9143f97ad#file-mod_auth_diaspora-lua-L98 here]. If your account is locked, it will have the <code>datetime</code> value that represents the date and time at which your account is locked. Setting it back to <code>null</code> will let you use your XMPP account again.


* Make sure that prosody user is in 'certs' group (this group may also be called ssl-certs as setup by Letencrypt)
-- Replace <username> with actual username of the locked account
* /etc/letsencrypt/ is the ssl directory.
UPDATE users SET locked_at=NULL WHERE username='<username>';
* Prosody user should have permissions to all folders importantly archive and live folders in /etc/letsencrypt. Permissions to each folder must be 750.
* Troubleshoot by checking if you can switch to each folder in /etc/letsencrypt as prosody user and cat the files.


'''If replication fails, you can restart it following the instructions here'''
NOTE: Matrix account won't be affected even if the associated diaspora account is closed because it uses a [https://pypi.org/project/synapse-diaspora-auth/ custom auth module] which works differently.


https://dba.stackexchange.com/questions/69394/mysql-replication-error-1594
= History =
* [[Poddery/Archive|See here]] for the archive of Poddery wiki page before the migration to Hetzner.


[[Category:Services]]
[[Category:Services]]

Latest revision as of 15:45, 28 November 2023

We run the decentralized and federated Diaspora social network, along with XMPP and Matrix instant messaging services, at poddery.com. The Poddery username and password used for Diaspora also work for the XMPP and Matrix services. chat.poddery.com provides the Riot client (accessed through a web browser), which can be used to connect to any Matrix server without installing a Riot app/client.

Environment

Hosting

Poddery is hosted at Hetzner with the following specs:

  • Intel Xeon E3-1246V3 Processor - 4 Cores, 3.5GHz
  • 4TB HDD
  • 32GB DDR3 RAM

Operating System

  • Debian Buster

User Visible Services

Diaspora

  • Currently installed version is 0.7.6.1, which is available in Debian Buster contrib.
  • For live statistics see https://poddery.com/statistics

Chat/XMPP

  • Prosody, a modern and lightweight XMPP server, is used.
  • Currently installed version is 0.11.2, which is available in Debian Buster.
  • All XEPs supported by the Conversations app are enabled.

Chat/Matrix

Homepage

Homepage and other static pages are maintained in the FSCI GitLab instance.

Backend Services

Web Server / Reverse Proxy

  • Nginx web server, which also acts as the front end (reverse proxy) for Diaspora and Matrix.

Database

  • PostgreSQL for Matrix
  • MySQL for Diaspora

TODO: Consider migrating Diaspora to PostgreSQL to optimize resources (we could drop one database service and reduce RAM usage).

Email

  • Exim

SSL/TLS certificates

  • Let's Encrypt

Firewall

  • UFW (Uncomplicated Firewall)

Intrusion Prevention

  • Fail2ban

Coordination

Contact

  • Email: poddery at autistici.org (alias that reaches Akhilan, Abhijith Balan, Fayad, Balasankar, Julius, Praveen, Prasobh, Sruthi, Shirish, Vamsee and Manukrishnan)
  • The following people have their GPG keys in the access file:
    • ID: 0xCE1F9C674512C22A - Praveen Arimbrathodiyil (piratepin)
    • ID: 0xB77D2E2E23735427 - Balasankar C
    • ID: 0x5D0064186AF037D9 - Manu Krishnan T V
    • ID: 0x51C954405D432381 - Fayad Fami (fayad)
    • ID: 0x863D4DF2ED9C28EF - Abhijith PA
    • ID: 0x6EF48CCD865A1FFC - Syam G Krishnan (sgk)
    • ID: 0xFD49D0BC6FEAECDA - Sagar Ippalpalli
    • ID: 0x92FDAB42A95FF20C - Pirate Bady (piratesin)
    • ID: 0x0B1955F40C691CCE - Kannan
    • ID: 0x32FF6C6F5B7AE248 - Akhil Varkey
    • ID: 0xFBB7061C27CB70C1 - Ranjith Siji
    • ID: 0xEAAFE4A8F39DE34F - Kiran S Kunjumon (hacksk)
  • It's recommended to set up the Vim GnuPG Plugin for transparent editing. Those who are new to GPG can follow this guide.

Server Access

Maintained in a private git repo at https://git.fosscommunity.in/community/access

Configuration and Maintenance

Boot into the rescue system using https://docs.hetzner.com/robot/dedicated-server/troubleshooting/hetzner-rescue-system

Disk Partitioning

  • RAID 1 setup on 2x2TB HDDs (sda and sdb).
mdadm --verbose --create /dev/mdX --level=mirror --raid-devices=2 /dev/sdaY /dev/sdbY
  • Separate partitions for swap (md0 - 16GB), boot (md1 - 512MB) and root (md2 - 50GB).
  • LVM on LUKS for separate encrypted data partitions for the database, static files and logs.
# Setup LUKS (make sure lvm2, udev and cryptsetup packages are installed).
cryptsetup luksFormat /dev/mdX
# Give disk encryption password as specified in the access repo
cryptsetup luksOpen /dev/mdX poddery

# LVM Setup
# Create physical volume named poddery
pvcreate /dev/mapper/poddery
# Create volume group named data
vgcreate data /dev/mapper/poddery
# Create logical volumes named log, db and static
lvcreate -n log /dev/data -L 50G
lvcreate -n db /dev/data -L 500G
# Assign remaining free space for static files
lvcreate -n static /dev/data -l 100%FREE 

# Setup filesystem on the logical volumes
mkfs.ext4 /dev/data/log
mkfs.ext4 /dev/data/db
mkfs.ext4 /dev/data/static

# Create directories for mounting the encrypted partitions
mkdir /var/lib/db /var/lib/static /var/log/poddery

# Manually mount encrypted partitions. This is needed on each reboot: Hetzner doesn't provide a web console, so the partitions can't be decrypted during boot.
mount /dev/data/db /var/lib/db
mount /dev/data/static /var/lib/static
mount /dev/data/log /var/log/poddery
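
The three mounts above can be made less error-prone with noauto entries in /etc/fstab — a sketch only, assuming the volume and mount-point names used above; noauto keeps the encrypted volumes out of the boot-time mount so each reboot needs only the luksOpen followed by three short mount commands:

```
# /etc/fstab (sketch) — noauto: mounted by hand after cryptsetup luksOpen
/dev/data/db      /var/lib/db       ext4  defaults,noauto  0  0
/dev/data/static  /var/lib/static   ext4  defaults,noauto  0  0
/dev/data/log     /var/log/poddery  ext4  defaults,noauto  0  0
```

With such entries in place, a plain `mount /var/lib/db` (and likewise for the other two) is enough after unlocking.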

Hardening checklist

  • SSH password based login disabled (allow only key based logins)
  • SSH login disabled for root user (use a normal user with sudo)
# Check for the following settings in /etc/ssh/sshd_config:
...
PermitRootLogin no
...
PasswordAuthentication no
...
  • ufw firewall enabled with only the ports that need to be opened (ufw tutorial):
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow http/tcp
ufw allow https/tcp
ufw allow Turnserver
ufw allow XMPP
ufw allow 8448
ufw enable

# Verify everything is setup properly
ufw status

# Enable ufw logging with default mode low
ufw logging on
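
The `ufw allow Turnserver` and `ufw allow XMPP` lines above rely on ufw application profiles. A profile is a small INI-style file under /etc/ufw/applications.d/ — the sketch below only illustrates the format; the port lists are examples and may differ from what the server actually uses:

```
# /etc/ufw/applications.d/poddery (sketch — port lists are examples)
[XMPP]
title=XMPP server
description=Prosody client and server-to-server ports
ports=5222,5269/tcp

[Turnserver]
title=TURN server
description=TURN/STUN relaying for VoIP
ports=3478,5349/tcp|3478/udp
```

After adding a profile, `ufw app list` should show it and `ufw allow XMPP` will open its ports.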
  • fail2ban configured against brute force attacks:
# Check for the following line /etc/ssh/sshd_config
...
LogLevel VERBOSE
...

# Restart SSH and enable fail2ban
systemctl restart ssh
systemctl enable fail2ban
systemctl start fail2ban

# To unban an IP, first check /var/log/fail2ban.log to get the banned IP and then run the following
# Here sshd is the default jail name; change it if you are using a different jail
fail2ban-client set sshd unbanip <banned_ip>

Diaspora

  • Install diaspora-installer from Debian Buster contrib:
apt install diaspora-installer
  • Move MySQL data to encrypted partition:
# Make sure /dev/data/db is mounted to /var/lib/db
systemctl stop mysql
systemctl disable mysql
mv /var/lib/mysql /var/lib/db/
ln -s /var/lib/db/mysql /var/lib/
systemctl start mysql
  • Move static files to encrypted partition:
# Make sure /dev/data/static is mounted to /var/lib/static
mkdir /var/lib/static/diaspora
mv /usr/share/diaspora/public/uploads /var/lib/static/diaspora
ln -s /var/lib/static/diaspora/uploads /usr/share/diaspora/public/
chown -R diaspora: /var/lib/static/diaspora
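
The move-and-symlink pattern above (also used later for the Synapse data) can be captured in a small helper — a sketch only; `move_to_encrypted` and the /tmp paths below are hypothetical illustrations, not anything present on the server:

```shell
#!/bin/sh
# Move a directory onto the encrypted partition and leave a symlink behind.
move_to_encrypted() {
    src="$1"       # e.g. /usr/share/diaspora/public/uploads
    dest_dir="$2"  # e.g. /var/lib/static/diaspora
    mkdir -p "$dest_dir"
    mv "$src" "$dest_dir/"
    # Recreate the original path as a symlink to the new location
    ln -s "$dest_dir/$(basename "$src")" "$(dirname "$src")/"
}

# Demonstration against throwaway paths:
rm -rf /tmp/demo
mkdir -p /tmp/demo/service/uploads
echo hello > /tmp/demo/service/uploads/f.txt
move_to_encrypted /tmp/demo/service/uploads /tmp/demo/encrypted
cat /tmp/demo/service/uploads/f.txt   # reads through the symlink
```

The same function covers the MySQL, PostgreSQL and Synapse media moves, differing only in the paths and the final chown.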
  • Modify configuration files at /etc/diaspora and /etc/diaspora.conf as needed (backups of the current configuration files are available in the access repo).
  • Homepage configuration:
# Make sure git and acl packages are installed
# Grant rwx permissions for the ssh user to /usr/share/diaspora/public
setfacl -m "u:<ssh_user>:rwx" /usr/share/diaspora/public

# Clone poddery.com repo
cd /usr/share/diaspora/public
git clone https://git.fosscommunity.in/community/poddery.com.git
cd poddery.com && mv * .[^.]* ..   # Answer yes for all files when prompted
cd .. && rmdir poddery.com
  • The Save Poddery repo is maintained as a submodule in the poddery.com repo. See this tutorial for working with git submodules.
# Clone save.poddery.com repo
cd /usr/share/diaspora/public/save
git submodule init
git submodule update

Matrix

  • See the official installation guide of Synapse for installing from source.
  • Nginx is used as a reverse proxy to send requests that have /_matrix/* in the URL to Synapse on port 8008. This is configured in /etc/nginx/sites-enabled/diaspora.
  • Shamil's Synapse Diaspora Auth script is used to authenticate Synapse with Diaspora database.
  • Move PostgreSQL data to encrypted partition:
# Make sure /dev/data/db is mounted to /var/lib/db
systemctl stop postgresql
systemctl disable postgresql
mv /var/lib/postgresql /var/lib/db/
ln -s /var/lib/db/postgresql /var/lib/
systemctl start postgresql
  • Move static files to encrypted partition:
# Make sure /dev/data/static is mounted to /var/lib/static
mkdir /var/lib/static/synapse
mv /var/lib/matrix-synapse/uploads /var/lib/static/synapse/
ln -s /var/lib/static/synapse/uploads /var/lib/matrix-synapse/
mv /var/lib/matrix-synapse/media /var/lib/static/synapse/
ln -s /var/lib/static/synapse/media /var/lib/matrix-synapse/
chown -R matrix-synapse: /var/lib/static/synapse
  • Install identity server mxisd (deb package available here)

Workers

  • For scalability, Poddery runs Synapse workers. Currently all workers specified in that page, except synapse.app.appservice, are running on poddery.com.
  • A new service matrix-synapse@.service is installed for the workers (save the synapse_worker file somewhere like /usr/local/bin/).
  • The worker config can be found at /etc/matrix-synapse/workers.
  • Synapse needs to be put behind a reverse proxy (see /etc/nginx/sites-enabled/matrix); many /_matrix/ URLs need to be overridden as well (see /etc/nginx/sites-enabled/diaspora).
  • These lines must be added to homeserver.yaml as we are running media_repository, federation_sender, pusher, user_dir workers respectively:
 enable_media_repo: False
 send_federation: False
 start_pushers: False
 update_user_directory: false
  • These services must be enabled:
matrix-synapse@synchrotron.service matrix-synapse@federation_reader.service matrix-synapse@event_creator.service matrix-synapse@federation_sender.service matrix-synapse@pusher.service matrix-synapse@user_dir.service matrix-synapse@media_repository.service matrix-synapse@frontend_proxy.service matrix-synapse@client_reader.service matrix-synapse@synchrotron_2.service
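
The template unit behind these service names might look roughly like the sketch below. This is an illustration only — the actual unit file on the server may differ, and the ExecStart path is an assumption based on the worker launcher script mentioned above:

```
# /etc/systemd/system/matrix-synapse@.service (sketch)
[Unit]
Description=Synapse worker %i
After=network.target matrix-synapse.service

[Service]
Type=simple
User=matrix-synapse
# %i expands to the worker name, e.g. "federation_sender"
ExecStart=/usr/local/bin/synapse_worker %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With a template unit like this, `systemctl enable matrix-synapse@pusher.service` instantiates one service per worker.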

To load balance between the two synchrotrons, we run matrix-synchrotron-balancer. It has a systemd unit file at /etc/systemd/system/matrix-synchrotron-balancer. The files are in /opt/matrix-synchrotron-balancer

Updating Synapse

Updating Riot-web

  • Just run the following (make sure to replace <version> with a proper version number like v1.0.0):
/var/www/get-riot <version>

Chat/XMPP

# Follow steps 1 to 6 from https://wiki.debian.org/Diaspora/XMPP and then run the following:
mysql -u root -p # Enter password from the access repo

CREATE USER 'prosody'@'localhost' IDENTIFIED BY '<passwd_in_repo>';
GRANT ALL PRIVILEGES ON diaspora_production.* TO 'prosody'@'localhost';
FLUSH PRIVILEGES;

systemctl restart prosody
  • Install plugins
# Make sure mercurial is installed
cd /etc && hg clone https://hg.prosody.im/prosody-modules/ prosody-modules
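
After cloning, Prosody still has to be told where the community modules live and which ones to load. A sketch of the relevant /etc/prosody/prosody.cfg.lua lines follows — the module names are examples only, not the server's actual list:

```lua
-- /etc/prosody/prosody.cfg.lua (sketch)
plugin_paths = { "/etc/prosody-modules" }  -- where the hg clone above lives
modules_enabled = {
    -- ... existing modules ...
    "smacks";      -- XEP-0198 stream management (example)
    "csi_simple";  -- XEP-0352 client state indication (example)
}
```

Prosody needs a restart (`systemctl restart prosody`) after changing the module list.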

Set Nginx Configuration for BOSH URLs

  • Add the following to the nginx configuration file to enable the BOSH URL and make JSXC work:
upstream chat_cluster {
  server localhost:5280;
}
location /http-bind {
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Host $http_host;
  proxy_set_header X-Forwarded-Proto https;
  proxy_redirect off;
  proxy_connect_timeout 5;
  proxy_buffering       off;
  proxy_read_timeout    70;
  keepalive_timeout     70;
  send_timeout          70;
  client_max_body_size 4M;
  client_body_buffer_size 128K;
  proxy_pass http://chat_cluster;
}
  • See here for more details on nginx configuration. Alternatively, apache settings can be found here.

TLS

  • Install letsencrypt.
  • Ensure proper permissions are set for /etc/letsencrypt and its contents.
chown -R root:ssl-cert /etc/letsencrypt
chmod g+r -R /etc/letsencrypt
chmod g+x /etc/letsencrypt/{archive,live}
  • Generate certificates. For more details see https://certbot.eff.org.
  • Make sure the certificates used by diaspora are symbolic links to letsencrypt default location:
ls -l /etc/diaspora/ssl
total 0
lrwxrwxrwx 1 root root 47 Apr  2 22:47 poddery.com-bundle.pem -> /etc/letsencrypt/live/poddery.com/fullchain.pem
lrwxrwxrwx 1 root root 45 Apr  2 22:48 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem

# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/diaspora/ssl/poddery.com-bundle.pem
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/diaspora/ssl/poddery.com.key
  • Make sure the certificates used by prosody are symbolic links to letsencrypt default location:
ls -l /etc/prosody/certs/
total 0
lrwxrwxrwx 1 root root 40 Mar 28 01:16 poddery.com.crt -> /etc/letsencrypt/live/poddery.com/fullchain.pem
lrwxrwxrwx 1 root root 33 Mar 28 01:16 poddery.com.key -> /etc/letsencrypt/live/poddery.com/privkey.pem

# If you don't get the above output, then run the following:
cp -L /etc/letsencrypt/live/poddery.com/fullchain.pem /etc/prosody/certs/poddery.com.crt
cp -L /etc/letsencrypt/live/poddery.com/privkey.pem /etc/prosody/certs/poddery.com.key
  • Note: the letsencrypt executable used below is actually a symlink to /usr/bin/certbot
  • Cron jobs:
crontab -e
30 2 * * 1 letsencrypt renew  >> /var/log/le-renew.log
32 2 * * 1 /etc/init.d/nginx reload
34 2 * * 1 /etc/init.d/prosody reload
  • Manually updating TLS certificate:
letsencrypt certonly --webroot --agree-tos -w /usr/share/diaspora/public  -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save -d save.poddery.com -w /var/www/riot -d chat.poddery.com
  • To include an additional subdomain such as fund.poddery.com, use the --expand parameter as shown below
letsencrypt certonly --webroot --agree-tos --expand -w /usr/share/diaspora/public -d poddery.com -d www.poddery.com -d test.poddery.com -d groups.poddery.com -d fund.poddery.com -w /usr/share/diaspora/public/save/ -d save.poddery.com -w /var/www/riot/ -d chat.poddery.com

Backup

Backup server is provided by Manu (KVM virtual machine with 180 GB storage and 1 GB RAM).

Debian Stretch was upgraded to Debian Buster before setting up replication of the Synapse database.

Documentation: https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql/

Currently the PostgreSQL database for matrix-synapse is backed up.

Before Replication (specific to poddery.com)

Set up tinc vpn on the backup server

# apt install tinc

Configure tinc by creating tinc.conf and the host file podderybackup under the label fsci. Add tinc-up and tinc-down scripts. Copy the poddery host config to the backup server and the podderybackup host config to the poddery.com server.

Reload the tinc vpn service on both the poddery.com and backup servers

# systemctl reload tinc@fsci.service

Enable tinc@fsci systemd service for autostart

# systemctl enable tinc@fsci.service
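
For reference, the backup side of such a setup typically needs only a few lines of tinc.conf — a sketch assuming the names used above (network fsci, hosts poddery and podderybackup); the actual files on the servers may differ:

```
# /etc/tinc/fsci/tinc.conf on the backup server (sketch)
Name = podderybackup
ConnectTo = poddery
Interface = tun0
```

The per-host files under /etc/tinc/fsci/hosts/ then carry each node's public key and address, which is why they are exchanged between the two servers.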

The synapse database was also pruned to reduce its size before replication, following this guide: https://levans.fr/shrink-synapse-database.html. If you want to follow this guide, make sure the Matrix Synapse server is updated to at least version 1.13, since that release introduces the Rooms API mentioned in the guide. Changes made to the steps in the guide:

 # jq '.rooms[] | select(.joined_local_members == 0) | .room_id' < roomlist.json | sed -e 's/"//g' > to_purge.txt

The room list obtained this way can be looped over to pass the room IDs as variables to the purge API.

# set +H   # (bash) prevents '!' in room IDs from triggering history expansion
# for room_id in $(cat to_purge.txt); do curl --header "Authorization: Bearer <your access token>" \
   -X POST -H "Content-Type: application/json" -d "{ \"room_id\": \"$room_id\" }" \
   'http://127.0.0.1:8008/_synapse/admin/v1/purge_room'; done;
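
The loop above simply feeds each room ID into a JSON body. The harmless sketch below shows the payload each curl call would send, using made-up room IDs and no network access:

```shell
# Build a sample to_purge.txt with made-up room IDs
printf '%s\n' '!abc:example.org' '!def:example.org' > /tmp/to_purge.txt

# Print the JSON body each iteration of the loop above would POST
while read -r room_id; do
  printf '{ "room_id": "%s" }\n' "$room_id"
done < /tmp/to_purge.txt
# → { "room_id": "!abc:example.org" }
#   { "room_id": "!def:example.org" }
```

Note that inside a non-interactive script history expansion is off anyway; `set +H` matters when pasting the loop into an interactive bash session.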

We also did not remove old history of large rooms.

Step 1: Postgresql (for synapse) Primary configuration

Create postgresql user for replication.

$ psql -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'yourpassword';"

The password is in the access repo if you need it later.

Allow standby to connect to primary using the user just created.

$ cd /etc/postgresql/11/main
$ nano pg_hba.conf

Add the line below to allow the replication user to access the server

host    replication     replication     172.16.0.3/32   md5

Next, open the PostgreSQL configuration file

nano postgresql.conf

Set the following configuration options in the postgresql.conf file

listen_addresses = 'localhost,172.16.0.2'
port=5432
wal_level = replica
max_wal_senders = 1
wal_keep_segments = 64
archive_mode = on
archive_command = 'cd .'

You need to restart PostgreSQL since postgresql.conf was edited and parameters changed:

# systemctl restart postgresql

Step 2: Postgresql (for synapse) Standby configuration

Install postgresql

# apt install postgresql

Check that the PostgreSQL server is running

# su postgres -c psql

Make sure en_US.UTF-8 locale is available

# dpkg-reconfigure locales

Stop postgresql before changing any configuration

# systemctl stop postgresql@11-main

Switch to postgres user

# su - postgres
$ cd /etc/postgresql/11/

Copy data from master and create recovery.conf

$ pg_basebackup -h git.fosscommunity.in -D /var/lib/postgresql/11/main/ -P -U replication --wal-method=fetch -R
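
The -R flag makes pg_basebackup write a recovery.conf into the data directory; on PostgreSQL 11 it looks roughly like the sketch below (the values here are placeholders, not the real credentials):

```
# /var/lib/postgresql/11/main/recovery.conf (generated by -R; values are placeholders)
standby_mode = 'on'
primary_conninfo = 'user=replication password=<password> host=<primary_host> port=5432'
```

If the file is missing or incomplete, it can be created by hand with these two settings.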

Open the postgres configuration file

$ nano postgresql.conf

Set the following configuration options in the postgresql.conf file

max_connections = 500        # must match the value in postgresql.conf on the primary or the service won't start
max_worker_processes = 16    # must match the primary as well
hot_standby = on             # the pg_basebackup -R command should set this; if not, set it to on manually

Start the stopped postgresql service

# systemctl start postgresql@11-main

Postgresql (for synapse) Replication Status

On Primary,

$ ps -ef | grep sender
$ psql -c "select * from pg_stat_activity where usename='replication';"

On Standby,

$ ps -ef | grep receiver

Troubleshooting

Allow XMPP login even if diaspora account is closed

Diaspora has a default setting to close accounts that have been inactive for 2 years. At the time of writing, there seems to be no way to reopen a closed account. This also means that if your account is closed, you will no longer be able to log in to the associated XMPP service either. Here we discuss a workaround to get access back to the XMPP account.

The prosody module mod_auth_diaspora is used for Diaspora-based XMPP auth. It checks whether the locked_at value in the users table of the Diaspora db is null (here and here). If your account is locked, it will hold the datetime value representing when your account was locked. Setting it back to null will let you use your XMPP account again.

-- Replace <username> with actual username of the locked account
UPDATE users SET locked_at=NULL WHERE username='<username>';
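
To confirm the change took effect, the same table can be queried afterwards (a sketch; run it in the Diaspora database just like the UPDATE above):

```
-- Replace <username> as above; locked_at should now show NULL
SELECT username, locked_at FROM users WHERE username='<username>';
```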

NOTE: Matrix account won't be affected even if the associated diaspora account is closed because it uses a custom auth module which works differently.

History

  • See here for the archive of Poddery wiki page before the migration to Hetzner.