- 20 Oct, 2020 7 commits
-
Łukasz Nowak authored
-
Łukasz Nowak authored
It's a dict, and in SlapOS' usage of Jinja2 it's good to see the type of a variable immediately from its name.
-
Łukasz Nowak authored
"parameter_dict" says nothing, whereas "software_parameter_dict" explains source and purpose of the information.
-
Łukasz Nowak authored
There is needless duplication of information.
-
Łukasz Nowak authored
-
Łukasz Nowak authored
It's true that those are templates, but the important information, which shall be in the parameter's name, is its purpose: a profile.
-
Łukasz Nowak authored
-
- 05 Oct, 2020 1 commit
-
Łukasz Nowak authored
Each node exposes global statistics for the full backend-haproxy via a special frontend; these are then transferred back to the master partition, so that the administrator can access them.
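How such statistics might be consumed is sketched below. This assumes the special frontend publishes haproxy's standard CSV statistics page; the URL is made up and the sketch is illustrative only, not the actual profile code.

    import csv
    import io
    import urllib.request

    # Hypothetical URL of the special frontend exposing backend-haproxy statistics.
    STATS_URL = "https://node1.example.com/haproxy-statistics;csv"

    with urllib.request.urlopen(STATS_URL) as response:
        payload = response.read().decode("utf-8")

    # haproxy's CSV stats start with a "# pxname,svname,..." header line.
    reader = csv.DictReader(io.StringIO(payload.lstrip("# ")))
    for row in reader:
        print(row["pxname"], row["svname"], row["status"])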
-
- 30 Sep, 2020 1 commit
-
Łukasz Nowak authored
Changes:
* traffic_cop was removed, so use traffic_manager directly
* logging.config was changed to logging.yaml
* made records.config and storage.config similar to the original files
* the proxy.config.admin.synthetic_port option was removed
* the proxy.config.process_manager.mgmt_port option was removed
* test: ignore traffic.out in logs
* test: update ATS version
* the pqsn field was removed and replaced with shn, so follow upstream: https://github.com/apache/trafficserver/commit/b0969c91ebc52b37f4c3195ec17d4d0c1c18650c
* add a test proving that squid.log works, as the upgrade resulted in the file not being created
-
- 15 Sep, 2020 1 commit
-
Łukasz Nowak authored
By using nginx it's possible to expose the logs nicely with the real frontend. furl is used to rewrite the URL coming from the frontend to add the proper username and password information.
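A minimal illustration of the kind of rewrite furl makes easy; the URL and credentials below are made up:

    from furl import furl

    # Hypothetical log URL as served behind the real frontend.
    url = furl("https://monitor.example.com/logs/access.log")
    url.username = "frontend-user"      # made-up credentials
    url.password = "frontend-password"
    print(url.url)
    # -> https://frontend-user:frontend-password@monitor.example.com/logs/access.log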
-
- 30 Jul, 2020 1 commit
-
Łukasz Nowak authored
Logs are critical for caddy-frontend, so let's configure rotate-num locally, as changes in the stack can come unattended and can result in losing logs.
-
- 17 Jul, 2020 2 commits
-
Łukasz Nowak authored
By default, do not offer the authentication certificate; the authenticate-to-backend switch can be used at cluster or slave level to control this feature.
-
Łukasz Nowak authored
This is needed in order to provide future support for client certificates to the backend. It also means that haproxy is used in all cases, with or without cache, and as a result the "cached" version of Caddy is dropped. Let haproxy set up maxconn by itself, as it's wise enough, and trust that it will detect and use proper limits instead of enforcing them in the shell with the ulimit trick (ulimit -n $(ulimit -Hn)). As an empty server alias can impact the configuration, add a proper test checking it.
-
- 22 Jun, 2020 2 commits
-
Łukasz Nowak authored
QUIC is not used at all and has been superseded by HTTP/3.
-
Łukasz Nowak authored
Customized configuration support has not been used since the introduction of the Caddy software, so there is no need to support it anymore.
-
- 02 Mar, 2020 1 commit
-
Łukasz Nowak authored
Instead of forcing the monitor port to be set in some cases, just generate it, so it's possible to correctly instantiate caddy-frontend in a one-partition scenario, like in webrunner or tests.
-
- 20 Feb, 2020 1 commit
-
Łukasz Nowak authored
/reviewed-on nexedi/slapos!633
-
- 30 Aug, 2019 1 commit
-
Łukasz Nowak authored
It defaults to 600s, which is the reasonable value chosen before.
-
- 18 Jul, 2019 1 commit
-
Łukasz Nowak authored
/reviewed-on nexedi/slapos!597
-
- 15 Apr, 2019 1 commit
-
Łukasz Nowak authored
This reverts commit 7993ff81. Custom configuration checks are hard to trust, as they can impact too many aspects of the running frontend. The frontend administrator knows the risks of custom configuration and shall take proper care. /reviewed-on nexedi/slapos!543
-
- 21 Mar, 2019 1 commit
-
Łukasz Nowak authored
Adapted configuration and instantiation to ATS 7.

Deployment:
* traffic_line has been replaced with traffic_ctl
* the access log, of squid style, is ascii instead of binary; to do so, logging.config is generated
* ip_allow.config is configured to allow access from any host
* RFC 5861 (stale content on error or revalidate) is implemented in core instead of with the deprecated plugin
* trafficserver-autoconf-port renamed to trafficserver-synthetic-port
* proxy.config.system.mmap_max removed, as it is not used by the system anymore

Tests:
* As the Via header is not returned to the client, it is dropped from the tests; instead its existence on the backend is checked.
* The promise plugin trafficserver-cache-availability.py is re-enabled, as it is expected to work immediately.
-
- 13 Mar, 2019 3 commits
-
Łukasz Nowak authored
-
Łukasz Nowak authored
AIKC - Automatic Internal Kedifa's Caucase CSR signing, which can be triggered by the automatic-internal-kedifa-caucase-csr option. It signs all CSRs which match the csr_id and certificate from the nodes which need them.
-
Łukasz Nowak authored
Use KeDiFa to store keys, and transmit the URL to the requester for master and slave partitions. Download keys at the slave partition level.

Use caucase to fetch the main caucase CA. kedifa-caucase-url is published in order to have access to it. Note: the caucase is prefixed with kedifa, as this is KeDiFa's caucase.

Use the kedifa-csr tool to generate the CSR and use the caucase-updater macro. Switch to KeDiFa with SSL Auth and updated goodies. KeDiFa endpoint URLs are randomised.

Only one (the first) user certificate is going to be automatically accepted. It shall be operated by the cluster owner, the requester of the frontend master partition, who will then be able to sign certificates for other users and also for services - that is, for each node in the cluster.

A special trick from https://security.stackexchange.com/questions/74345/provide-subjectaltname-to-openssl-directly-on-command-line is used for one-command generation of extensions in the certificate. Note: we could upgrade to openssl 1.1.1 in order to have it really simplified (see https://security.stackexchange.com/a/183973 ).

Improve CSR readability by creating cluster-identification, which is the master partition title, and use it as the Organization of the CSR.

Reserve slots for data exchange in KeDiFa.
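The linked trick concerns the openssl command line; purely as an illustration of the same idea (a CSR carrying a subjectAltName extension, with the cluster-identification as Organization), here is a sketch using Python's cryptography library. All names below are made up and this is not the profile's actual implementation.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        # cluster-identification (the master partition title) used as Organization
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "example-cluster"),
        ]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("node1.example.com")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())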
-
- 08 Feb, 2019 1 commit
-
Łukasz Nowak authored
try_duration and try_interval are Caddy proxy switches which allow dealing with a non-working backend (https://caddyserver.com/docs/proxy). A non-working backend is one to which the connection is lost or could not be established, without any data having been sent.

The defaults try_duration=5s and try_interval=250ms are chosen so that in normal network conditions (with all possible problems in the network, like lost packets) the browser will have to wait up to 5 seconds to be informed that the backend is inaccessible or for the request to start being processed, but only a bit more than 250ms if Caddy has to re-establish the connection to a faulty backend.

In order to check it out, it is advisable to set up a system with a real backend, like an apache one, and configure iptables to randomly reject packets to it:

  iptables -A INPUT -m statistic --mode random -p tcp --dport <backend_port> \
    --probability 0.05 -j REJECT --reject-with tcp-reset

Using ab or any other tool results in a lot of "502 EOF" entries in the Caddy error log, also reported by ab. With this configuration there are no more errors visible to the client coming from problems in the network between Caddy and the backend.
-
- 17 Jan, 2019 1 commit
-
Łukasz Nowak authored
One of the solutions for random 502 errors from Caddy is to fully disable the HTTP2 protocol (https://github.com/mholt/caddy/issues/1080). We run Caddy with HTTP2 enabled by default, as we can enable/disable it per slave, but in some environments it might be better to fully avoid HTTP2 code paths in Caddy. /reviewed-on nexedi/slapos!495
-
- 14 Nov, 2018 1 commit
-
Łukasz Nowak authored
-
- 12 Sep, 2018 1 commit
-
Łukasz Nowak authored
Even if the master partition owner authorises a given slave for custom configuration, reject this slave if its snippet does not pass validation.
-
- 06 Sep, 2018 4 commits
-
Łukasz Nowak authored
Instead of relying on the slapos.cookbook:certificate_authority recipe, which stops buildout processing, extract the minimal implementation into a runtime key/certificate validator and reject slaves which do not pass this test. This commit results in a TODO item being done.
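A rough sketch of what such a runtime validator could check, assuming "passing the test" means the slave-provided key and certificate are valid PEM and match each other; Python's cryptography library is used here only for illustration and this is not the profile's actual implementation.

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import load_pem_private_key

    def key_matches_certificate(key_pem: bytes, cert_pem: bytes) -> bool:
        # Reject anything which is not valid PEM, then compare public keys (RSA/EC).
        try:
            key = load_pem_private_key(key_pem, password=None)
            cert = x509.load_pem_x509_certificate(cert_pem)
        except ValueError:
            return False
        return cert.public_key().public_numbers() == key.public_key().public_numbers()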
-
Łukasz Nowak authored
As the slave requester is able to enter any string in server-alias, validate that it is a correct domain name and, in case validation fails, reject that slave. Also use a trick to get access to global slave state; see https://fabianlee.org/2016/10/18/saltstack-setting-a-jinja2-variable-from-an-inner-block-scope/
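The gist of the linked trick, sketched here with plain Jinja2 driven from Python: variables assigned inside a {% for %} block do not survive the loop, so a mutable container created outside the loop is updated instead. The variable names are made up and the space check merely stands in for the real domain validation.

    from jinja2 import Environment

    env = Environment(extensions=["jinja2.ext.do"])
    template = env.from_string("""
    {%- set rejected = [] -%}
    {%- for alias in server_alias_list -%}
    {%-   if " " in alias %}{% do rejected.append(alias) %}{% endif -%}
    {%- endfor -%}
    rejected: {{ rejected }}
    """)
    print(template.render(server_alias_list=["good.example.com", "bad alias"]))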
-
Łukasz Nowak authored
Install the validators dependency, which is an easy way to check that an email is an email or that a domain is correct. As the slave requester is able to enter any string in custom domain, validate that it is a correct domain name and, in case validation fails, reject that slave.
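A small illustration of the check this enables (the hostnames below are made up); validators functions return a falsy failure object instead of raising, so the result can be used directly in a condition:

    import validators

    for candidate in ("frontend.example.com", "not a domain", "user@example.com"):
        if validators.domain(candidate):
            print(candidate, "-> accepted")
        else:
            print(candidate, "-> rejected")  # the corresponding slave would be rejected

    # The email case mentioned above is covered by validators.email().
    print(bool(validators.email("user@example.com")))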
-
Łukasz Nowak authored
Instead of needlessly storing information in a configuration section, pass it via a jinja2 parameter. This is safer in case extra_slave_instance_list contains a value like ${section:option}.
-
- 06 Aug, 2018 1 commit
-
Łukasz Nowak authored
/reviewed-on nexedi/slapos!368
-
- 31 Jul, 2018 2 commits
-
Łukasz Nowak authored
This option is not advertised and it is not needed at all in the Caddy configuration.
-
Łukasz Nowak authored
Features:
* jinja2 is used to generate instance templates
* downloads are done the same way for all resources
* create with shared content for all instance profiles
* fill in instance-common with shared sections
* render templates late in order to ease their extension and development
* drop a not-needed duplicated section
* drop slap-parameter in frontend and replicate template
* simplify monitor configuration
* move instance-parameter to the instance file

Thanks to this, only one profile, the topmost, is responsible for parsing and passing through the information which comes from the network.
-