==============
Caddy Frontend
==============

Frontend system using Caddy, based on the apache-frontend software release, allowing URLs like myinstance.myfrontenddomainname.com to be rewritten and proxied to the real IP/URL of myinstance.

Caddy Frontend follows a master instance / slave instance design: a single main Caddy instance acts as the frontend for many slaves.

Software type
=============

Caddy frontend is available in 4 software types:

  * ``default``: the standard way to use the Caddy frontend, configuring everything with a few given parameters
  * ``custom-personal``: this software type allows each slave to edit its Caddy configuration file
  * ``default-slave``: XXX
  * ``custom-personal-slave``: XXX


About frontend replication
==========================

Slaves of the root instance are sent as a parameter to the requested frontends, which will process them. The only difference is that they will then return the would-be published information to the root instance instead of publishing it. The root instance will then do a synthesis and publish the information to its slaves. The replicate instance only uses a handful of parameters for itself and will transmit the rest to the requested frontends.

These parameters are:

  * ``-frontend-type``: the software type to deploy the frontends with (defaults to ``default``)
  * ``-frontend-quantity``: the quantity of frontends to request (defaults to 2)
  * ``-frontend-i-state``: the state of frontend i
  * ``-frontend-config-i-foo``: frontend i will be requested with parameter ``foo``
  * ``-frontend-software-release-url``: the software release to be used for the frontends (defaults to the current software release)
  * ``-sla-i-foo``: where "i" is the number of the concerned frontend (between 1 and ``-frontend-quantity``) and "foo" an SLA parameter

For example::

  <parameter id="-frontend-quantity">3</parameter>
  <parameter id="-frontend-type">custom-personal</parameter>
  <parameter id="-frontend-2-state">stopped</parameter>
  <parameter id="-sla-3-computer_guid">COMP-1234</parameter>
  <parameter id="-frontend-software-release-url">https://lab.nexedi.com/nexedi/slapos/raw/someid/software/caddy-frontend/software.cfg</parameter>


will request the third frontend on COMP-1234. All frontends will be of software type ``custom-personal``. The second frontend will be requested with the state ``stopped``.

*Note*: the way slaves are transformed to a parameter avoids modifying more than 3 lines in the frontend logic.

**Important NOTE**: The way you request a slave from a replicate frontend is the same as the one you would use for the software given in "-frontend-quantity". Do not forget to use ``replicate`` as the software type. XXXXX So far it is not possible to do a simple request on a replicate frontend if you do not know the software_guid or other SLA parameters of the master instance. In fact, we do not yet know the software type of the "requested" frontends. TO BE IMPLEMENTED
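
For illustration, here is a hedged sketch of the same replicate request issued from slapconsole (``caddy_frontend`` stands for the software release URL; the call shape follows the slave request examples later in this document)::

  # Sketch: requesting a replicated frontend cluster; the "replicate"
  # software type and the -frontend-* / -sla-* keys are described above.
  instance = request(
      software_release=caddy_frontend,
      software_type="replicate",
      partition_reference='replicated frontend',
      partition_parameter_kw={
          "-frontend-quantity": "3",
          "-frontend-type": "custom-personal",
          "-frontend-2-state": "stopped",
          "-sla-3-computer_guid": "COMP-1234",
      },
  )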

XXX Should be moved to specific JSON File

Extra parameters per frontend, with defaults::

  ram-cache-size = 1G
  disk-cache-size = 8G
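
Presumably these defaults can be overridden per frontend through the ``-frontend-config-i-foo`` mechanism described above; a hedged sketch (the exact parameter names are an assumption, not a verified interface)::

  # Hypothetical: overriding the cache sizes of frontend 1 via
  # -frontend-config-1-* keys (assumed, not verified).
  instance = request(
      software_release=caddy_frontend,
      software_type="replicate",
      partition_reference='replicated frontend',
      partition_parameter_kw={
          "-frontend-config-1-ram-cache-size": "2G",
          "-frontend-config-1-disk-cache-size": "16G",
      },
  )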

How to deploy a frontend server
===============================

This is to deploy an entire frontend server with a public IPv4. If you want to use an already deployed frontend to make your service available via IPv4, skip to the "Examples" section below.

First, you will need to request a "master" instance of Caddy Frontend with:

  * A ``domain`` parameter where the frontend will be available
  * A ``public-ipv4`` parameter to state which public IPv4 will be used

like::

  <?xml version='1.0' encoding='utf-8'?>
  <instance>
   <parameter id="domain">moulefrite.org</parameter>
   <parameter id="public-ipv4">xxx.xxx.xxx.xxx</parameter>
  </instance>

Then, it is possible to request many slave instances of Caddy Frontend (currently only from slapconsole; the web UI doesn't work yet), like::

  instance = request(
    software_release=caddy_frontend,
    partition_reference='frontend2',
    shared=True,
    partition_parameter_kw={"url":"https://[1:2:3:4]:1234/someresource"}
  )

Those slave instances will be redirected to the "master" instance, and you will see on the "master" instance the proper directives associated with all slave instances.

Finally, the slave instance will be accessible from: https://someidentifier.moulefrite.org.

About SSL and SlapOS Master Zero Knowledge
==========================================

**IMPORTANT**: One Caddy instance cannot serve more than one specific SSL site while remaining compatible with obsolete browsers (e.g. IE8). See http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI

SSL keys and certificates are sent directly to the frontend cluster, in order to follow the zero-knowledge principle of SlapOS Master.

*Note*: Until a master-partition or slave-specific certificate is uploaded, each slave is served with a fallback certificate. This fallback certificate is self-signed, does not match the served hostname, and results in a lack of proper response over HTTPS.

Obtaining CA for KeDiFa
-----------------------

KeDiFa uses caucase, so it is required to obtain the caucase CA certificate used to sign the KeDiFa SSL certificate, in order to be sure that certificates are sent to a valid KeDiFa.

The easiest way to do so is to use caucase.

On a secure and trusted box, which will be used to upload certificates to the master or slave frontend partition, install caucase (https://pypi.org/project/caucase/).

The master and slave partitions will return the key ``kedifa-caucase-url``; use it to create and start a ``caucase-updater`` service::

  caucase-updater \
    --ca-url "${kedifa-caucase-url}" \
    --cas-ca "${frontend_name}.caucased.ca.crt" \
    --ca "${frontend_name}.ca.crt" \
    --crl "${frontend_name}.crl"

where ``frontend_name`` identifies the frontend cluster to which you will upload the certificate (it can be just one slave).

Make sure it is automatically started when the trusted machine reboots: you want it running so you can forget about it. It will keep KeDiFa's CA certificate up to date when it gets renewed, so you know you are still talking to the same service as when you originally uploaded the certificate.

Master partition
----------------

After requesting the master partition, it will return ``master-key-generate-auth-url`` and ``master-key-upload-url``.

Doing an HTTP GET on ``master-key-generate-auth-url`` will return an authentication token, which is used to communicate with ``master-key-upload-url``. This token shall be stored securely.

By doing an HTTP PUT to ``master-key-upload-url`` with the authentication token appended, it is possible to upload a PEM bundle of the certificate, key and any accompanying CA certificates to the master.

An example session is::

  request(...)

  curl -g -X GET --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" master-key-generate-auth-url
  > authtoken

  cat certificate.pem key.pem ca-bundle.pem > master.pem

  curl -g -X PUT --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" --data-binary @master.pem master-key-upload-url+authtoken
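
The same exchange can be scripted; a minimal sketch using the Python ``requests`` library (the URL values are placeholders for the published connection parameters, and appending the token to the upload URL mirrors the curl example above)::

  import requests

  # Placeholders for the values published by the master partition.
  generate_auth_url = "..."   # master-key-generate-auth-url
  upload_url = "..."          # master-key-upload-url
  ca = "frontend.ca.crt"      # CA obtained via caucase-updater, see above

  # Fetch the authentication token.
  token = requests.get(generate_auth_url, verify=ca).text

  # Upload the PEM bundle (certificate + key + CA chain).
  with open("master.pem", "rb") as f:
      requests.put(upload_url + token, data=f.read(), verify=ca)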

This replaces old request parameters:

 * ``apache-certificate``
 * ``apache-key``
 * ``apache-ca-certificate``

(*Note*: They are still supported for backward compatibility, but any value sent to the ``master-key-upload-url`` will supersede information from SlapOS Master.)

Slave partition
---------------

After requesting the slave partition, it will return ``key-generate-auth-url`` and ``key-upload-url``.

Doing an HTTP GET on ``key-generate-auth-url`` will return an authentication token, which is used to communicate with ``key-upload-url``. This token shall be stored securely.

By doing an HTTP PUT to ``key-upload-url`` with the authentication token appended, it is possible to upload a PEM bundle of the certificate, key and any accompanying CA certificates to the master.

An example session is::

  request(...)

  curl -g -X GET --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" key-generate-auth-url
  > authtoken

  cat certificate.pem key.pem ca-bundle.pem > master.pem

  curl -g -X PUT --cacert "${frontend_name}.ca.crt" --crlfile "${frontend_name}.crl" --data-binary @master.pem key-upload-url+authtoken

This replaces old request parameters:

 * ``ssl_crt``
 * ``ssl_key``
 * ``ssl_ca_crt``

(*Note*: They are still supported for backward compatibility, but any value sent to the ``key-upload-url`` will supersede information from SlapOS Master.)


How to have custom configuration in frontend server - XXX - to be written
=========================================================================

In your instance directory, you, as sysadmin, can directly edit two
configuration files that won't be overwritten by SlapOS to customize your
instance:

 * ``$PARTITION_PATH/srv/srv/apache-conf.d/apache_frontend.custom.conf``
 * ``$PARTITION_PATH/srv/srv/apache-conf.d/apache_frontend.virtualhost.custom.conf``

The first one is included at the end of the main apache configuration file.
The second one is included in the virtualhost of the main apache configuration file.

SlapOS will just create those two files for you, then completely forget about them.

*Note*: make sure that the UNIX user of the instance has read access to those
files if you edit them.

Instance Parameters
===================

Master Instance Parameters
--------------------------

The parameters for instances are described at `instance-caddy-input-schema.json <instance-caddy-input-schema.json>`_.

Here is some additional information about the parameters listed below:

domain
~~~~~~

Name of the domain to be used (example: mydomain.com). Subdomains of this domain will be used for the slave instances (example: instance12345.mydomain.com). It is then recommended to add a DNS wildcard entry for the subdomains of the chosen domain, like::

  *.mydomain.com. IN A 123.123.123.123

using the IP given by the Master Instance. ``domain`` is a mandatory parameter.

public-ipv4
~~~~~~~~~~~
Public IPv4 of the frontend (the one Caddy will be indirectly listening on).

port
~~~~
Port used by Caddy. Optional parameter, defaults to 4443.

plain_http_port
~~~~~~~~~~~~~~~
Port used by Caddy to serve plain HTTP (only used to redirect to HTTPS).
Optional parameter, defaults to 8080.


Slave Instance Parameters
-------------------------

The parameters for instances are described at `instance-slave-caddy-input-schema.json <instance-slave-caddy-input-schema.json>`_.

Here is some additional information about the parameters listed below:

path
~~~~
Only used if ``type`` is "zope".

Will append the specified path to the "VirtualHostRoot" of Zope's VirtualHostMonster.

``path`` is an optional parameter, ignored if not specified.
Example of value: "/erp5/web_site_module/hosting/"

url
~~~
Necessary to activate the cache. URL of the backend to use.

``url`` is an optional parameter.

Example: http://mybackend.com/myresource

domain
~~~~~~

Necessary to activate the cache.

The frontend will be accessible from this domain.

``domain`` is an optional parameter.

Example: www.mycustomdomain.com

enable_cache
~~~~~~~~~~~~

Necessary to activate the cache.

``enable_cache`` is an optional parameter.

Functionalities for Caddy configuration
---------------------------------------

In the slave Caddy configuration you can use parameters that will be replaced during instantiation. They should be written as Python template parameters, e.g. ``%(parameter)s``:

  * ``cache_access``: URL of the cache. Should replace the backend URL in the configuration in order to use the cache
  * ``access_log``: path of the slave access log, in order to log to a file
  * ``error_log``: path of the slave error log, in order to log to a file
  * ``certificate``: path to the certificate
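
For instance, a hedged sketch of how this substitution behaves at instantiation (the configuration snippet and the values are illustrative only, not a verified slave configuration)::

  # Python %-template substitution as applied to a slave configuration.
  template = """
  proxy / %(cache_access)s
  log %(access_log)s
  errors %(error_log)s
  # certificate available at %(certificate)s
  """
  values = {
      "cache_access": "http://127.0.0.1:23432",  # hypothetical cache URL
      "access_log": "/srv/slave_access.log",     # hypothetical paths
      "error_log": "/srv/slave_error.log",
      "certificate": "/srv/ssl/slave.pem",
  }
  print(template % values)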


Examples
========

Here are some examples of how to make your SlapOS service available through an already deployed frontend.

Simple Example (default)
------------------------

Request slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be
redirected and accessible from the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="RootSoftwareInstance",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
    }
  )


Zope Example (default)
----------------------

Request slave frontend instance using a Zope backend so that
https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the
proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="RootSoftwareInstance",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "type":"zope",
    }
  )


Advanced example
----------------

Request slave frontend instance using a Zope backend, with Varnish activated,
listening to a custom domain and redirecting to /erp5/ so that
https://[1:2:3:4:5:6:7:8]:1234/erp5/ will be redirected and accessible from
the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="RootSoftwareInstance",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "enable_cache":"true",
        "type":"zope",
        "path":"/erp5",
        "domain":"mycustomdomain.com",
    }
  )

Simple Example (custom-personal)
--------------------------------

Request slave frontend instance so that https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="custom-personal",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
    }
  )

Simple Cache Example - XXX - to be written
------------------------------------------

Request slave frontend instance with cache enabled so that https://[1:2:3:4:5:6:7:8]:1234 will be redirected and accessible from the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="custom-personal",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "domain": "www.example.org",
        "enable_cache": "True",
    }
  )

Advanced example - XXX - to be written
--------------------------------------

Request slave frontend instance using custom configuration, with cache and SSL certificates,
listening to a custom domain and redirecting to /erp5/ so that
https://[1:2:3:4:5:6:7:8]:1234/erp5/ will be redirected and accessible from
the proxy::

  instance = request(
    software_release=caddy_frontend,
    software_type="custom-personal",
    partition_reference='my frontend',
    shared=True,
    partition_parameter_kw={
        "url":"https://[1:2:3:4:5:6:7:8]:1234",
        "enable_cache":"true",
        "type":"zope",
        "path":"/erp5",
        "domain":"example.org",
        "ssl_key":"""-----BEGIN RSA PRIVATE KEY-----
  XXXXXXX..........XXXXXXXXXXXXXXX
  -----END RSA PRIVATE KEY-----""",
        "ssl_crt":"""-----BEGIN CERTIFICATE-----
  XXXXXXXXXXX.............XXXXXXXXXXXXXXXXXXX
  -----END CERTIFICATE-----""",
        "ssl_ca_crt":"""-----BEGIN CERTIFICATE-----
  XXXXXXXXX...........XXXXXXXXXXXXXXXXX
  -----END CERTIFICATE-----""",
        "ssl_csr":"""-----BEGIN CERTIFICATE REQUEST-----
  XXXXXXXXXXXXXXX.............XXXXXXXXXXXXXXXXXX
  -----END CERTIFICATE REQUEST-----""",
    }
  )

QUIC Protocol
=============

Note: QUIC support in Caddy is really experimental. It can result in silent problems with QUIC connections or a hanging Caddy process, so in case of the QUIC errors ``QUIC_NETWORK_IDLE_TIMEOUT`` or ``QUIC_PEER_GOING_AWAY`` it is required to restart the Caddy process.

Note: Chrome will refuse to connect to QUIC on a different port than the one HTTPS has been served on. As Caddy binds to high ports, if QUIC is wanted, the browser needs to connect to the high port too.

The experimental QUIC available in Caddy is not configurable. If Caddy is configured to bind to HTTPS port ``${port}``, QUIC is going to be advertised on this port only. It is not possible to configure another public port in case of port rewriting.

So it is required to ``DNAT`` from the ``${public IP}`` of the computer to the ``${local IP}`` of the computer partition running Caddy, with the configured port::

  iptables -t nat -A PREROUTING -d ${public IP}/32 -p udp -m udp --dport ${port} -j DNAT --to-destination ${local IP}:${port}


Promises
========

Note that in some cases promises will fail:

 * it is not possible to request a frontend slave for monitoring (monitoring frontend promise)
 * no slaves are present (configuration promise and others)
 * no cached slave is present (configuration promise and others)

This is a known issue and shall be tackled soon.

KeDiFa
======

An additional partition with KeDiFa (Key Distribution Facility) is requested by default on the same computer as the master frontend partition.

By adding request keys like ``-sla-kedifa-<key>`` it is possible to provide SLA information for the kedifa partition. E.g. to put it on computer ``couscous`` it shall be ``-sla-kedifa-computer_guid: couscous``.
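
A hedged sketch of passing such a key in the master request (the other parameters follow the earlier master deployment example; ``caddy_frontend`` stands for the software release URL)::

  # Sketch: pinning the kedifa partition to computer "couscous".
  instance = request(
      software_release=caddy_frontend,
      partition_reference='frontend master',
      partition_parameter_kw={
          "domain": "moulefrite.org",
          "public-ipv4": "xxx.xxx.xxx.xxx",
          "-sla-kedifa-computer_guid": "couscous",
      },
  )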

Notes
=====

It is not possible with SlapOS to listen on ports <= 1024, because processes are
not run as root.

Solution 1 (iptables)
---------------------

It is a good idea then to go to the node where the instance is running
and set some ``iptables`` rules like these (if using the default ports)::

  iptables -t nat -A PREROUTING -p tcp -d {public_ipv4} --dport 443 -j DNAT --to-destination {listening_ipv4}:4443
  iptables -t nat -A PREROUTING -p tcp -d {public_ipv4} --dport 80 -j DNAT --to-destination {listening_ipv4}:8080
  ip6tables -t nat -A PREROUTING -p tcp -d {public_ipv6} --dport 443 -j DNAT --to-destination {listening_ipv6}:4443
  ip6tables -t nat -A PREROUTING -p tcp -d {public_ipv6} --dport 80 -j DNAT --to-destination {listening_ipv6}:8080

Where ``{public_ipv[46]}`` is the public IP of your server (or at least the LAN IP your NAT forwards to), and ``{listening_ipv[46]}`` is the private IP (like 10.0.34.123) that the instance is using and sending as its connection parameter.

Additionally, in order to access the server from itself, such entries are needed in the ``OUTPUT`` chain (as the internal packets won't appear in the ``PREROUTING`` chain)::

  iptables -t nat -A OUTPUT -p tcp -d {public_ipv4} --dport 443 -j DNAT --to {listening_ipv4}:4443
  iptables -t nat -A OUTPUT -p tcp -d {public_ipv4} --dport 80 -j DNAT --to {listening_ipv4}:8080
  ip6tables -t nat -A OUTPUT -p tcp -d {public_ipv6} --dport 443 -j DNAT --to {listening_ipv6}:4443
  ip6tables -t nat -A OUTPUT -p tcp -d {public_ipv6} --dport 80 -j DNAT --to {listening_ipv6}:8080

Solution 2 (network capability)
-------------------------------

It is also possible to directly allow the service to listen on ports 80 and 443 using the following commands::

  setcap 'cap_net_bind_service=+ep' /opt/slapgrid/$CADDY_FRONTEND_SOFTWARE_RELEASE_MD5/go.work/bin/caddy
  setcap 'cap_net_bind_service=+ep' /opt/slapgrid/$CADDY_FRONTEND_SOFTWARE_RELEASE_MD5/parts/6tunnel/bin/6tunnel

Then specify in the master instance parameters (see the sketch after this list):

 * set ``port`` to ``443``
 * set ``plain_http_port`` to ``80``
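
A hedged request sketch combining these parameters (building on the master deployment example earlier; ``caddy_frontend`` stands for the software release URL)::

  # Sketch: master frontend using the standard HTTPS/HTTP ports, after
  # granting the binding capability as shown above.
  instance = request(
      software_release=caddy_frontend,
      partition_reference='frontend master',
      partition_parameter_kw={
          "domain": "moulefrite.org",
          "public-ipv4": "xxx.xxx.xxx.xxx",
          "port": "443",
          "plain_http_port": "80",
      },
  )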

Technical notes
===============

Instantiated cluster structure
------------------------------

Instantiating caddy-frontend results in a cluster spanning several partitions:

 * master (the controlling one)
 * kedifa (contains the kedifa server)
 * caddy-frontend-N, which contains the running processes serving the sites; this partition can be replicated with the ``-frontend-quantity`` parameter

This means sites are served in the ``caddy-frontend-N`` partitions, each of which is structured as:

 * Caddy serving the browser
 * (optional) Apache Traffic Server for caching
 * Caddy connected to the backend

Kedifa implementation
---------------------

The `Kedifa <https://lab.nexedi.com/nexedi/kedifa>`_ server runs in the kedifa partition.

Each ``caddy-frontend-N`` partition downloads certificates from the kedifa server.

Caucase (exposed by ``kedifa-caucase-url`` in the master partition parameters) is used to handle the certificates used for authentication to the kedifa server.

If ``automatic-internal-kedifa-caucase-csr`` is enabled (it is by default), scripts run on the master partition to simulate a human signing the certificates of each caddy-frontend-N node.