How to secure the self-hosted Nextcloud

Creating an encrypted and SIEM monitored Nextcloud instance. One server, one network interface.

Freeing ourselves from big-tech companies

[Image: securing_nextcloud_banner_1200x630.png]

Why self-hosted?

Even though clouds are becoming truly secure, better automated, and amazingly scalable, there are two problems they do not address:

  • The first is the discomfort of having to trust someone else to manage your personal information. History teaches us that the more complex the infrastructure, the more demanding the protection of the entrusted data becomes. Big tech companies are no exception to this rule and fight inevitable data breaches by investing in security and PR.
  • The second is the discomforting feeling of becoming a product yourself. The monetization of user behavior analysis is becoming an integral part of every online service, and paying service fees does not change that. Whether you pay for the service or not, some crazy AI is watching you and coming up with ideas about how to squeeze money out of you.

What shall be protected ...

We need to protect our most sensitive data such as documents containing personal information, contacts and calendars. Since we want to have this data available online, what we are actually protecting is a publicly accessible service, or a service on a pair of ports on a public IP address. The service we are talking about in our case is the self-hosted Nextcloud instance.

[Image: securing_nextcloud_what.png]

... and against which threats?

Our goal is to protect ourselves from leakage of sensitive data so that we can sleep well knowing that our data is safe and that we can detect and respond to potential intrusions.

How to be protected?

We'll start with the ability to see security-related events. Obviously, we can't talk about security without seeing what's going on within our infrastructure. So, in addition to encrypting our data and hardening the service itself, we will deploy the open-source SIEM Security Onion alongside this service to give us deep visibility into security events.

Here is an example of how much data the SIEM collected during the first week of operation of this solution:

[Image: securing_nextcloud_dash.png]

Another example shows how many alerts the SIEM generated from these events within the first week:

[Image: securing_nextcloud_alerts.png]

Solution concept

The entire solution runs on a single server with one network interface with a public IP address configured:

[Image: securing_nextcloud_2_KVM_internal.png]
  • The host computer has a network interface with a public IP. In our case, the name of this interface is enp1s0f1.
  • The host runs three virtual machines named nextcloud, vpn, and onion.
  • All VMs sit in the same virtual network behind NAT. In our case, the virtual network is called default and is defined as the subnet 192.168.122.0/24. This network should be created automatically during the KVM installation.
  • The only ports accessible from the outside will be 80/TCP and 443/TCP for the web application, a high-range UDP port for the VPN, and a high-range TCP port for SSH. Let's say we have chosen 52439/UDP for the VPN and 53201/TCP for SSH.
  • The web and VPN ports are forwarded to the respective VMs via a qemu hook.
  • Endpoint security monitoring is performed by the Wazuh agent deployed on the host and all three virtual machines.
  • Network traffic monitoring is done by mirroring all host traffic to the onion VM's network interface, which is then monitored by the Suricata IDS.

Requirements

In our example, we are using an older 1U server with a 16-core Xeon E5540, 32 GB of RAM and a common 7200 rpm 2 TB SATA hard disk drive. The distribution of RAM and disk space across the host and VMs might look like this:

VM          memory   disk space
nextcloud   12 GB    1500 GB
onion       14 GB    460 GB
vpn         2 GB     5 GB
host        4 GB     30 GB
total       32 GB    2 TB
  • The more disk space we give the onion, the longer it can keep PCAP artifacts. 460 GB should be enough for one or two users to keep several weeks of history.
  • The more CPU cores we have, the better. A 16-core CPU seems to be quite sufficient. When assigning CPU cores to VMs, feel free to set the maximum available on each of the VMs you create. We can play with core reservation later if we run into performance issues.
  • We make sure to use virtio drivers wherever possible.
  • The network interface must have a public IP address (routable from the internet) configured, and this IP must have a proper DNS record. In our example we'll work with example.com (a quick check follows below).
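
A quick way to confirm the DNS record points at the public IP is to resolve the domain from any machine with dig installed (using our example domain; substitute your own):

dig +short example.com
  <public IP>

If the answer differs from the public IP of our interface, fix the DNS record before going any further.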

Installation

In this guide, we will go through all the installation steps in linear order. At the very end, we will have a SIEM-monitored Nextcloud instance ready to use. One server with one network interface.

Host setup

We will use KVM (Kernel-based Virtual Machine) on CentOS 7 Minimal.

Note that CentOS 7 reaches its end of life in June 2024, so we should be thinking about moving to Rocky Linux (or some other distro) soon.

Before installing anything, we will do our due diligence and verify that the software we use is legitimate. Therefore, we'll verify the ISO image according to the instructions on the CentOS wiki and make it a habit.

Given that we have 2 TB of disk space, this is how the partition scheme could look:

NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                      8:0    0   1.8T  0 disk
├─sda1                   8:1    0   200M  0 part /boot/efi
├─sda2                   8:2    0     1G  0 part /boot
├─sda3                   8:3    0  37.8G  0 part
│ ├─centos-root        253:0    0    30G  0 lvm  /
│ └─centos-swap        253:1    0   7.8G  0 lvm  [SWAP]
└─sda4                   8:4    0   1.7T  0 part

OS and binaries

After the installation is completed, update the operating system and install the virtualization stack and some other tools we will need:

yum -y update && yum -y install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils git python3 wget

Start and enable the libvirtd service:

systemctl start libvirtd && systemctl enable libvirtd
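
Optionally, we can double-check that the CPU exposes hardware virtualization and that the KVM kernel modules are loaded. A quick sanity check:

egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm

The first command should return a non-zero count, and the second should list kvm together with kvm_intel (or kvm_amd).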

We can afford to disable SELinux enforcement, as we use additional security measures to protect our data. Set SELinux to permissive mode:

setenforce 0

Make it permanent by changing the directive SELINUX= to permissive in the file /etc/selinux/config.
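
For example, assuming the default layout of the config file, a single sed invocation does the trick:

sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config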

Obscure the SSH port (optional)

Before we go any further, we'll move the SSH service to some high-range port, like 53201/TCP, to make it a bit harder for botnets to find:

firewall-cmd --zone=public --permanent --add-port=53201/tcp
firewall-cmd --reload

Make sure the port is not 'filtered' but rather 'closed'. We can do that by using nmap from some remote device:

nmap -p 53201 -T4 <public IP>
  ...
  PORT      STATE    SERVICE
  53201/tcp closed     unknown
  ...
  • Closed = Firewall allows the traffic but the OS closes the connection because no service is listening on this port.
  • Filtered = Firewall immediately drops packets.

If the port is truly 'closed' and not 'filtered', change the SSH server port by setting the Port directive in /etc/ssh/sshd_config to the new value and reload the SSH service.
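
With our example port, the relevant line in /etc/ssh/sshd_config simply becomes:

Port 53201

With the new port in place, reload the service: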

systemctl reload sshd

If all is good, block the now unused port 22/TCP:

firewall-cmd --zone=public --remove-port=22/tcp --permanent
firewall-cmd --zone=public --remove-service=ssh --permanent

KVM resources

Create the LVM pool named images for the future virtual disks out of the free space on our dedicated partition /dev/sda4:

virsh pool-define-as images logical - - /dev/sda4 images /dev/images
Pool images defined

virsh pool-build images
Pool images built

Start the pool and set it to autostart at host boot:

virsh pool-start images
Pool images started

virsh pool-autostart images
Pool images marked as autostarted

Create the virtual disk for each future VM:

virsh vol-create-as images nextcloud 1500G
Vol nextcloud created

virsh vol-create-as images vpn 5G
Vol vpn created

virsh vol-create-as images onion 460G
Vol onion created
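
To double-check, we can list the volumes in the pool; all three should show up with paths under /dev/images:

virsh vol-list images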

ISO images

The next thing we need to do is download all the required ISO images. On the host, create the directory /ISOs:

mkdir /ISOs

Download and verify all the required ISOs (in our example: CentOS 7 Minimal, Ubuntu Server 22.04 and Security Onion 2.3) and store them on the host in the /ISOs directory. Then define, activate and set the pool to autostart:

virsh pool-define-as ISOs dir - - - - /ISOs
Pool ISOs defined

virsh pool-start ISOs
Pool ISOs started

virsh pool-autostart ISOs
Pool ISOs marked as autostarted
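
At this point both storage pools should be active and marked for autostart, which we can verify with:

virsh pool-list --all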

Networking

Let's define IPs and MACs for the future VM interfaces. Select some IP addresses from the default KVM network range of 192.168.122.0/24. To generate a MAC address, you can run this crazy one-liner:

date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/'

Taken from ZenCoffe Blog article by James Young.

Then, our example might look like this:

system      IP               MAC
nextcloud   192.168.122.5    52:54:00:50:28:a7
onion       192.168.122.50   52:54:00:af:a2:d7
vpn         192.168.122.150  52:54:00:18:17:c1

Based on that, configure the static DHCP mappings in the KVM network:

virsh net-update default add ip-dhcp-host "<host mac='52:54:00:50:28:a7' name='nextcloud' ip='192.168.122.5'/>" --live --config
virsh net-update default add ip-dhcp-host "<host mac='52:54:00:af:a2:d7' name='onion' ip='192.168.122.50'/>" --live --config
virsh net-update default add ip-dhcp-host "<host mac='52:54:00:18:17:c1' name='vpn' ip='192.168.122.150'/>" --live --config

This will allow all VMs to obtain their IP configuration via DHCP, so we don't have to worry about the IP setup anymore.
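
If we want to confirm that the static mappings really made it into the network definition, we can dump it and look for the three <host .../> entries in the dhcp section:

virsh net-dumpxml default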

Now it's getting more interesting. We will add a bridge interface onto which all traffic will be mirrored and which will later be monitored by the Security Onion sensor interface. Let's call this bridge interface br1 and define it by creating the file /etc/sysconfig/network-scripts/ifcfg-br1:

DEVICE=br1
STP=no
TYPE=Bridge
BOOTPROTO=none
DEFROUTE=yes
NAME=br1
ONBOOT=yes

Now bring it up:

ifup br1

Now to the best part. We will mirror all the traffic going through our single physical network interface, in our case enp1s0f1, and send it to the br1 bridge we've just created. Let's make it a systemd service so that mirroring starts as soon as the network starts. Firstly, we'll create a simple script called /root/tap_on.bsh*:

#!/bin/bash
MONITOR_PORT=enp1s0f1
MIRROR_PORT=br1
# Ingress
tc qdisc add dev $MONITOR_PORT ingress
tc filter add dev $MONITOR_PORT parent ffff: protocol all u32 match u8 0 0 action mirred egress mirror dev $MIRROR_PORT
# Egress
tc qdisc add dev $MONITOR_PORT handle 1: root prio
tc filter add dev $MONITOR_PORT parent 1: protocol all u32 match u8 0 0 action mirred egress mirror dev $MIRROR_PORT

*This code was taken from port mirroring with linux bridges, an article from 2014. It still works perfectly fine.

Set it to be executable by the owner:

chmod u+x /root/tap_on.bsh

Create systemd service configuration file /etc/systemd/system/tap.service:

[Unit]
Description=Tap
After=network.target

[Service]
StartLimitIntervalSec=0
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/root/tap_on.bsh

[Install]
WantedBy=multi-user.target

Reload the systemd service files, then start and enable the new tap service:

systemctl daemon-reload && systemctl start tap.service && systemctl enable tap.service
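
A quick way to confirm the mirroring actually works is to watch the bridge for a few packets; we should see copies of the traffic hitting enp1s0f1 (tcpdump may need to be installed first with yum -y install tcpdump):

tcpdump -ni br1 -c 10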

Port forward

Now we want the publicly accessible port 52439/UDP on the host to be forwarded to port 1194/UDP on the future vpn VM. Similarly, ports 443/TCP and 80/TCP to the future nextcloud VM. For this, we will use libvirt-hook-qemu by saschpe. Clone this git repository (as root) and install it:

git clone https://github.com/saschpe/libvirt-hook-qemu.git
cd libvirt-hook-qemu/
make install

Change the content of the file /etc/libvirt/hooks/hooks.json:

{
  "vpn": {
    "interface": "virbr0",
    "private_ip": "192.168.122.150",
    "port_map": {
      "udp": [[52439, 1194]]
    }
  },
  "nextcloud": {
    "interface": "virbr0",
    "private_ip": "192.168.122.5",
    "port_map": {
      "tcp": [
        [443, 443],
        [80, 80]
      ]
    }
  }
}

Restart libvirtd to apply the qemu hook:

systemctl restart libvirtd.service

Open ports 52439/UDP, 443/TCP and 80/TCP on the host so they are accessible from the Internet:

firewall-cmd --zone=public --permanent --add-port=52439/udp
firewall-cmd --zone=public --permanent --add-port=80/tcp
firewall-cmd --zone=public --permanent --add-port=443/tcp
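
The hook adds its forwarding rules (plain iptables DNAT under the hood) when the corresponding VM starts, so once the vpn and nextcloud VMs are up and running, the rules should be visible in the nat table:

iptables -t nat -L PREROUTING -n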

Installing VMs

Use the virt-manager tool to create the corresponding virtual machines:

vpn

Attribute            Setup
ISO image            /ISOs/CentOS-7-x86_64-Minimal-2207-02.iso
Memory               2048 MB
CPUs                 All
Disk                 /dev/images/vpn
Disk bus             VirtIO
NIC Network source   Virtual network 'default' : NAT
NIC Device model     virtio
NIC MAC address      52:54:00:18:17:c1

nextcloud

Attribute            Setup
ISO image            /ISOs/ubuntu-22.04.1-live-server-amd64.iso
Memory               12288 MB
CPUs                 All
Disk                 /dev/images/nextcloud
Disk bus             VirtIO
NIC Network source   Virtual network 'default' : NAT
NIC Device model     virtio
NIC MAC address      52:54:00:50:28:a7

onion

Attribute            Setup
ISO image            /ISOs/securityonion-2.3.190-20221207.iso
Memory               14336 MB
CPUs                 All
Disk                 /dev/images/onion
Disk bus             VirtIO
NIC Network source   Virtual network 'default' : NAT
NIC Device model     virtio
NIC MAC address      52:54:00:af:a2:d7
NIC1 Network source  Bridge br1: Host device vnet1
NIC1 Device model    virtio

Set all to start at host boot.
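
Setting the VMs to start at host boot can be done from virt-manager or, just as easily, from the command line:

virsh autostart vpn
virsh autostart nextcloud
virsh autostart onion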

Now we need to install each VM by following the separate guides for nextcloud, vpn and onion.

After a successful completion, we should have a publicly exposed Nextcloud instance ready for immediate use, monitored by the SIEM.

Don't worry, the real fun is only just beginning. Now we'll have to understand and evaluate each warning. That will lead us down a rabbit hole, at the end of which we should be able to tell the purpose and impact of every packet and event in our infrastructure. Enjoy the ride with us!