==============
Caddy Frontend
==============
Frontend system using Caddy, based on the apache-frontend software release, allowing to rewrite and proxy URLs like myinstance.myfrontenddomainname.com to the real IP/URL of myinstance.

Caddy Frontend works using the master instance / slave instance design: a single master instance of Caddy acts as a frontend for many slaves.

Software type
=============

Caddy frontend is available in 4 software types:

  * ``default`` : the standard way to use the Caddy frontend, configuring everything with a few given parameters
  * ``custom-personal`` : this software type allows each slave to edit its Caddy configuration file
  * ``default-slave`` : XXX
  * ``custom-personal-slave`` : XXX


About frontend replication
==========================

Slaves of the root instance are sent as a parameter to the requested frontends, which will process them. The only difference is that the frontends will return the would-be published information to the root instance instead of publishing it themselves. The root instance will then do a synthesis and publish the information to its slaves. The replicate instance uses only the following parameters for itself and transmits the rest to the requested frontends.

These parameters are:

  * ``-frontend-type`` : the software type to deploy frontends with (defaults to "default")
  * ``-frontend-quantity`` : the quantity of frontends to request (defaults to 2)
  * ``-frontend-i-state`` : the state of frontend i
  * ``-frontend-config-i-foo`` : frontend i will be requested with parameter foo
  * ``-frontend-software-release-url`` : software release to be used for frontends (defaults to the current software release)
  * ``-sla-i-foo`` : where "i" is the number of the concerned frontend (between 1 and "-frontend-quantity") and "foo" an SLA parameter.

For example::

  <parameter id="-frontend-quantity">3</parameter>
  <parameter id="-frontend-type">custom-personal</parameter>
  <parameter id="-frontend-2-state">stopped</parameter>
  <parameter id="-sla-3-computer_guid">COMP-1234</parameter>
  <parameter id="-frontend-software-release-url">https://lab.nexedi.com/nexedi/slapos/raw/someid/software/caddy-frontend/software.cfg</parameter>


will request the third frontend on COMP-1234. All frontends will be of software type ``custom-personal``. The second frontend will be requested with the state "stopped".

*Note*: the way slaves are transformed into a parameter avoids modifying more than 3 lines in the frontend logic.

**Important NOTE**: The way you ask for a slave on a replicate frontend is the same as the one you would use for the software given in ``-frontend-software-release-url``. Do not forget to use "replicate" as the software type. So far it is not possible to do a simple request on a replicate frontend if you do not know the computer_guid or other SLA parameters of the master instance, because the software type of the "requested" frontends is not known yet. TO BE IMPLEMENTED
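
A hedged sketch of the same request issued from slapconsole, mirroring the XML example above (the ``caddy_frontend`` software release variable and the partition reference are illustrative)::

  instance = request(
    software_release=caddy_frontend,
    software_type="replicate",
    partition_reference='my replicated frontend',
    partition_parameter_kw={
        "-frontend-quantity":"3",
        "-frontend-type":"custom-personal",
        "-frontend-2-state":"stopped",
        "-sla-3-computer_guid":"COMP-1234",
    }
  )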

XXX Should be moved to a specific JSON file.

Extra parameters per frontend, with defaults::

  ram-cache-size = 1G
  disk-cache-size = 8G

How to deploy a frontend server
===============================

This is to deploy an entire frontend server with a public IPv4. If you want to use an already deployed frontend to make your service available via IPv4, skip to the "Examples" section.

First, you will need to request a "master" instance of Caddy Frontend with:

  * A ``domain`` parameter where the frontend will be available
  * A ``public-ipv4`` parameter to state which public IPv4 will be used

like::

  <?xml version='1.0' encoding='utf-8'?>
  <instance>
   <parameter id="domain">moulefrite.org</parameter>
   <parameter id="public-ipv4">xxx.xxx.xxx.xxx</parameter>
  </instance>

Then, it is possible to request many slave instances (currently only from slapconsole; the web UI does not work yet) of Caddy Frontend, like::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='frontend2',
    shared=True,
    partition_parameter_kw={"url":"https://[1:2:3:4]:1234/someresource"}
  )

Those slave instances will be redirected to the "master" instance, on which you will see the proper directives associated with all the slave instances.

Finally, the slave instance will be accessible at https://someidentifier.moulefrite.org.

About SSL and SlapOS Master Zero Knowledge
==========================================

**IMPORTANT**: One Caddy instance cannot serve more than one specific SSL site while remaining compatible with obsolete browsers (e.g. IE8). See http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI

SSL keys and certificates are sent directly to the frontend cluster, in order to follow the zero-knowledge principle of SlapOS Master.

*Note*: Until a master-partition or slave-specific certificate is uploaded, each slave is served with a fallback certificate. This fallback certificate is self-signed, does not match the served hostname, and results in a lack of response over HTTPS.

Obtaining CA for KeDiFa
-----------------------

KeDiFa uses caucase, so the caucase CA certificate used to sign the KeDiFa SSL certificate has to be obtained, in order to be sure that certificates are sent to the valid KeDiFa.

The easiest way to do so is to use caucase.

On some secure and trusted box which will be used to upload certificates to the master or slave frontend partition, install caucase from https://pypi.org/project/caucase/

The master and slave partitions will return the key ``kedifa-caucase-url``; with it, create and start a ``caucase-updater`` service::

  caucase-updater \
    --ca-url "${kedifa-caucase-url}" \
    --cas-ca "${frontend_name}.caucased.ca.crt" \
    --ca "${frontend_name}.ca.crt" \
    --crl "${frontend_name}.crl"

where ``frontend_name`` identifies the frontend cluster to which you will upload the certificate (it can be just one slave).

Make sure it is automatically started when the trusted machine reboots: you want to have it running so you can forget about it. It will keep KeDiFa's CA certificate up to date when it gets renewed, so you know you are still talking to the same service as when you previously uploaded the certificate, all the way back to the original upload.

Master partition
----------------

After requesting the master partition, it will return ``master-key-generate-auth-url`` and ``master-key-upload-url``.

Doing an HTTP GET on ``master-key-generate-auth-url`` will return an authentication token, which is used to communicate with ``master-key-upload-url``. This token shall be stored securely.

By doing an HTTP PUT to ``master-key-upload-url`` with the authentication token appended, it is possible to upload a PEM bundle of the certificate, key, and any accompanying CA certificates to the master.

An example session is::

  request(...)

  curl -g -X GET --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" master-key-generate-auth-url
  > authtoken

  cat certificate.pem key.pem ca-bundle.pem > master.pem

  curl -g -X PUT --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" --data-binary @master.pem master-key-upload-url+authtoken

This replaces the old request parameters:

 * ``apache-certificate``
 * ``apache-key``
 * ``apache-ca-certificate``

(*Note*: They are still supported for backward compatibility, but any value sent to the ``master-key-upload-url`` will supersede information from SlapOS Master.)
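
The same flow can be scripted. Below is a minimal sketch using Python and the ``requests`` library; the URL values are placeholders for the connection parameters returned by the request and, unlike the ``curl`` session above, it does not check the CRL::

  import requests

  # Placeholders for the connection parameters returned by request(...).
  master_key_generate_auth_url = "<master-key-generate-auth-url>"
  master_key_upload_url = "<master-key-upload-url>"
  ca = "frontend.ca.crt"  # KeDiFa CA obtained with caucase-updater

  # 1. Obtain the authentication token (store it securely).
  token = requests.get(master_key_generate_auth_url, verify=ca).text.strip()

  # 2. Upload the PEM bundle (certificate, key and CA chain) with the
  #    token appended to the upload URL, as in the curl session above.
  with open("master.pem", "rb") as f:
      response = requests.put(master_key_upload_url + token, data=f.read(), verify=ca)
  response.raise_for_status()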

Slave partition
---------------

After requesting the slave partition, it will return ``key-generate-auth-url`` and ``key-upload-url``.

Doing an HTTP GET on ``key-generate-auth-url`` will return an authentication token, which is used to communicate with ``key-upload-url``. This token shall be stored securely.

By doing an HTTP PUT to ``key-upload-url`` with the authentication token appended, it is possible to upload a PEM bundle of the certificate, key, and any accompanying CA certificates to the master.

An example session is::

  request(...)

  curl -g -X GET --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" key-generate-auth-url
  > authtoken

  cat certificate.pem key.pem ca-bundle.pem > master.pem

  curl -g -X PUT --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" --data-binary @master.pem key-upload-url+authtoken

This replaces the old request parameters:

 * ``ssl_crt``
 * ``ssl_key``
 * ``ssl_ca_crt``

(*Note*: They are still supported for backward compatibility, but any value sent to the ``key-upload-url`` will supersede information from SlapOS Master.)


Instance Parameters
===================

Master Instance Parameters
--------------------------

The parameters for instances are described at `instance-caddy-input-schema.json <instance-caddy-input-schema.json>`_.

Here is some additional information about some of the listed parameters:

domain
~~~~~~

Name of the domain to be used (example: mydomain.com). Subdomains of this domain will be used for the slave instances (example: instance12345.mydomain.com). It is then recommended to add a wildcard entry in the DNS for the subdomains of the chosen domain, like::

  *.mydomain.com. IN A 123.123.123.123

using the IP given by the Master Instance. ``domain`` is a mandatory parameter.

public-ipv4
~~~~~~~~~~~

Public IPv4 of the frontend (the one Caddy will be indirectly listening to).
195 196 197

port
~~~~

Port used by Caddy. Optional parameter, defaults to 4443.
199 200 201

plain_http_port
~~~~~~~~~~~~~~~

Port used by Caddy to serve plain HTTP (only used to redirect to HTTPS). Optional parameter, defaults to 8080.


Slave Instance Parameters
-------------------------

The parameters for instances are described at `instance-slave-caddy-input-schema.json <instance-slave-caddy-input-schema.json>`_.

Here is some additional information about some of the listed parameters:

path
~~~~
Only used if type is "zope".

Will append the specified path to the "VirtualHostRoot" of Zope's VirtualHostMonster.

``path`` is an optional parameter, ignored if not specified.
Example of value: "/erp5/web_site_module/hosting/"

url
~~~

Necessary to activate the cache. The ``url`` of the backend to use.

``url`` is an optional parameter.

Example: http://mybackend.com/myresource

domain
~~~~~~

Necessary to activate the cache.

The frontend will be accessible from this domain.

``domain`` is an optional parameter.

Example: www.mycustomdomain.com

enable_cache
~~~~~~~~~~~~

Necessary to activate the cache.

``enable_cache`` is an optional parameter.

Functionalities for Caddy configuration
---------------------------------------

In the slave Caddy configuration you can use parameters that will be replaced during instantiation. They should be entered as Python template parameters, e.g. ``%(parameter)s``:

  * ``cache_access`` : URL of the cache; it should replace the backend URL in the configuration in order to use the cache
  * ``access_log`` : path of the slave access log, in order to log to a file
  * ``error_log`` : path of the slave error log, in order to log to a file
  * ``certificate`` : path to the certificate
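
As an illustration of the substitution mechanism only (the configuration fragment below is hypothetical, not a complete slave configuration), instantiation performs ordinary Python ``%``-formatting on the configuration text::

  # Illustrative sketch: expanding %(parameter)s placeholders the way
  # instantiation does; the paths and URL are made-up examples.
  fragment = """
  log / %(access_log)s
  errors %(error_log)s
  proxy / %(cache_access)s
  """
  print(fragment % {
      "access_log": "/srv/slapgrid/slappart0/var/log/frontend-access.log",
      "error_log": "/srv/slapgrid/slappart0/var/log/frontend-error.log",
      "cache_access": "http://[2001:db8::10]:23432",
  })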


Examples
========

Here are some examples of how to make your SlapOS service available through an already deployed frontend.

Simple Example (default)
------------------------

Request a slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be
redirected and accessible from the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="RootSoftwareInstance",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
    }
  )


Zope Example (default)
----------------------

Request a slave frontend instance using a Zope backend so that
https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the
proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="RootSoftwareInstance",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "type":"zope",
    }
  )


Advanced example
----------------

Request a slave frontend instance using a Zope backend, with the cache activated,
listening on a custom domain and redirecting to /erp5/ so that
https://[1:2:3:4:5:6:7:8]:1234/erp5/ will be redirected and accessible from
the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="RootSoftwareInstance",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "enable_cache":"true",
        "type":"zope",
        "path":"/erp5",
        "domain":"mycustomdomain.com",
    }
  )

Simple Example (custom-personal)
--------------------------------

Request a slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be
redirected and accessible from the proxy::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='my frontend',
    shared=True,
    software_type="custom-personal",
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
    }
  )

Simple Cache Example - XXX - to be written
------------------------------------------

Request a slave frontend instance with the cache enabled, so that
https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the
proxy::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='my frontend',
    shared=True,
    software_type="custom-personal",
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "domain": "www.example.org",
        "enable_cache": "True",
    }
  )

Advanced example - XXX - to be written
--------------------------------------

Request a slave frontend instance using a custom configuration, with cache and
SSL certificates, listening on a custom domain and redirecting to /erp5/ so
that https://[1:2:3:4:5:6:7:8]:1234/erp5/ will be redirected and accessible
from the proxy::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='my frontend',
    shared=True,
    software_type="custom-personal",
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "enable_cache":"true",
        "type":"zope",
        "path":"/erp5",
        "domain":"example.org",
        "ssl_key":"""-----BEGIN RSA PRIVATE KEY-----
  XXXXXXX..........XXXXXXXXXXXXXXX
  -----END RSA PRIVATE KEY-----""",
        "ssl_crt":"""-----BEGIN CERTIFICATE-----
  XXXXXXXXXXX.............XXXXXXXXXXXXXXXXXXX
  -----END CERTIFICATE-----""",
        "ssl_ca_crt":"""-----BEGIN CERTIFICATE-----
  XXXXXXXXX...........XXXXXXXXXXXXXXXXX
  -----END CERTIFICATE-----""",
        "ssl_csr":"""-----BEGIN CERTIFICATE REQUEST-----
  XXXXXXXXXXXXXXX.............XXXXXXXXXXXXXXXXXX
  -----END CERTIFICATE REQUEST-----""",
    }
  )

Promises
========

Note that in some cases promises will fail:

 * not possible to request frontend slave for monitoring (monitoring frontend promise)
 * no slaves present (configuration promise and others)
 * no cached slave present (configuration promise and others)

This is a known issue and shall be tackled soon.

KeDiFa
======

An additional partition with KeDiFa (Key Distribution Facility) is by default requested on the same computer as the master frontend partition.

By adding keys like ``-sla-kedifa-<key>`` to the request, it is possible to provide SLA information for the kedifa partition. E.g. to put it on computer ``couscous``, use ``-sla-kedifa-computer_guid: couscous``, as sketched below.
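
A hedged sketch of a master request carrying such a key (the other parameters are reused from the deployment example earlier)::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='frontend master',
    partition_parameter_kw={
        "domain":"moulefrite.org",
        "public-ipv4":"xxx.xxx.xxx.xxx",
        "-sla-kedifa-computer_guid":"couscous",
    }
  )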

Notes
=====

It is not possible with SlapOS to listen on ports <= 1024, because processes
are not run as root.

Solution 1 (iptables)
---------------------

It is a good idea then to go on the node where the instance is running
and set some ``iptables`` rules like (if using the default ports)::

  iptables -t nat -A PREROUTING -p tcp -d {public_ipv4} --dport 443 -j DNAT --to-destination {listening_ipv4}:4443
  iptables -t nat -A PREROUTING -p tcp -d {public_ipv4} --dport 80 -j DNAT --to-destination {listening_ipv4}:8080
  ip6tables -t nat -A PREROUTING -p tcp -d {public_ipv6} --dport 443 -j DNAT --to-destination {listening_ipv6}:4443
  ip6tables -t nat -A PREROUTING -p tcp -d {public_ipv6} --dport 80 -j DNAT --to-destination {listening_ipv6}:8080

Where ``{public_ipv[46]}`` is the public IP of your server (or at least the LAN IP to which your NAT forwards), and ``{listening_ipv[46]}`` is the private IP (e.g. 10.0.34.123) that the instance is using and sending as a connection parameter.

Additionally, in order to access the server from itself, similar entries are needed in the ``OUTPUT`` chain (as internal packets do not traverse the ``PREROUTING`` chain)::

  iptables -t nat -A OUTPUT -p tcp -d {public_ipv4} --dport 443 -j DNAT --to {listening_ipv4}:4443
  iptables -t nat -A OUTPUT -p tcp -d {public_ipv4} --dport 80 -j DNAT --to {listening_ipv4}:8080
  ip6tables -t nat -A OUTPUT -p tcp -d {public_ipv6} --dport 443 -j DNAT --to {listening_ipv6}:4443
  ip6tables -t nat -A OUTPUT -p tcp -d {public_ipv6} --dport 80 -j DNAT --to {listening_ipv6}:8080

Solution 2 (network capability)
-------------------------------

It is also possible to directly allow the service to listen on ports 80 and 443, using the following commands::

  setcap 'cap_net_bind_service=+ep' /opt/slapgrid/$CADDY_FRONTEND_SOFTWARE_RELEASE_MD5/go.work/bin/caddy
  setcap 'cap_net_bind_service=+ep' /opt/slapgrid/$CADDY_FRONTEND_SOFTWARE_RELEASE_MD5/parts/6tunnel/bin/6tunnel

Then specify in the master instance parameters:

 * set ``port`` to ``443``
 * set ``plain_http_port`` to ``80``
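
For example, a hedged sketch of the corresponding master instance request (domain and IP values are reused from the deployment example above)::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='frontend master',
    partition_parameter_kw={
        "domain":"moulefrite.org",
        "public-ipv4":"xxx.xxx.xxx.xxx",
        "port":"443",
        "plain_http_port":"80",
    }
  )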

Technical notes
===============

Instantiated cluster structure
------------------------------

Instantiating caddy-frontend results in a cluster of various partitions:

 * master (the controlling one)
 * kedifa (contains kedifa server)
 * caddy-frontend-N, which contains the running processes to serve sites - this partition can be replicated with the ``-frontend-quantity`` parameter

So it means sites are served in the ``caddy-frontend-N`` partitions, and each such partition is structured as:

 * Caddy serving the browser
 * (optional) Apache Traffic Server for caching
 * Caddy connected to the backend

Kedifa implementation
---------------------

The `Kedifa <https://lab.nexedi.com/nexedi/kedifa>`_ server runs in the kedifa partition.

Each ``caddy-frontend-N`` partition downloads certificates from the kedifa server.

Caucase (exposed via ``kedifa-caucase-url`` in the master partition parameters) is used to handle the certificates used for authentication to the kedifa server.

If ``automatic-internal-kedifa-caucase-csr`` is enabled (it is by default), scripts running on the master partition simulate a human operator to sign the certificates of each caddy-frontend-N node.

Support for X-Real-Ip and X-Forwarded-For
-----------------------------------------

X-Forwarded-For and X-Real-Ip are transmitted to the backend, but only for IPv4 accesses to the frontend. In the case of IPv6 access, the provided IP will be wrong, because of the use of 6tunnel, as illustrated below.
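
A minimal throwaway backend for checking what actually arrives (a sketch; the header names come from this section, while the bind address and port are arbitrary)::

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class EchoHeaders(BaseHTTPRequestHandler):
      def do_GET(self):
          # Echo the client address headers set by the frontend.
          body = "X-Real-Ip: {}\nX-Forwarded-For: {}\n".format(
              self.headers.get("X-Real-Ip", "(missing)"),
              self.headers.get("X-Forwarded-For", "(missing)"),
          ).encode()
          self.send_response(200)
          self.send_header("Content-Type", "text/plain")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  HTTPServer(("0.0.0.0", 8000), EchoHeaders).serve_forever()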