Commit 8eeb74fe authored by Kazuhiko Shiozaki's avatar Kazuhiko Shiozaki

software/erp5-zope2: revive Zope2 ERP5 software release.

parent 715c6b7c
Available ``software-type`` values
==================================
- ``default``
Recommended for development and production use. Automatic creation of
erp5-site.
Notes
=====
This software release is not intended to be accessed directly, but through a
front-end instance which is expected to contain the RewriteRules_ (or
equivalent) needed to relocate Zope's URLs via its VirtualHostMonster_. See the
``frontend`` erp5 instance parameter.
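
For illustration, assuming the default ``erp5`` site id and a hypothetical
``erp5.example.com`` public domain, such a front-end rewrites incoming
requests to backend URLs of the form::

  https://<balancer-address>/VirtualHostBase/https/erp5.example.com:443/erp5/VirtualHostRoot/<external path>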
ERP5 connects by default to the public cloudooo at https://cloudooo.erp5.net/.
See the ``cloudooo`` Software Release to set up a cloudooo cluster if necessary.
Replication
===========
Replication allows setting up an ERP5 instance whose data follows another
instance.
Relations between ERP5 instances in a replication graph depend on what is
supported by individual data managers (ex: a neo cluster can replicate from a
neo cluster which itself replicates from a 3rd).
Replication lag constraints (aka sync/async replication) depend on individual
data managers (ex: neo replication between clusters is always asynchronous).
Ignoring replication lag, replicated data can be strictly identical (ex: a
replicated ZODB or SQL database will contain the same data as upstream), or
may imply some remapping (ex: replicating Zope logs from an instance with 2
zope families of 2 partitions of 2 zopes each to an instance with a single
zope total).
Data whose replication is supported
-----------------------------------
- neo database
Data whose replication will eventually be supported
---------------------------------------------------
- mariadb database
- zope ``zope-*-access.log`` and ``zope-*-Z2.log``
- ``mariadb-slow.log``
Data whose replication is not planned
-------------------------------------
- zeo: use neo instead
Setting up replication
----------------------
In addition to your usual parameter set, you need to provide the following
parameters::

  {
    "zope-partition-dict": {},  # so no zope is instantiated
    "zodb": [
      {
        "storage-dict": {
          "upstream-masters": ...,  # as published by the to-become upstream ERP5 instance as "neo-masters"
        },
        "type": "neo",  # the only ZODB type supporting replication
        ...
      },
      ...
    ],
    ...
  }
Port ranges
===========
This software release assigns the following port ranges by default:
==================== ==========
Partition type       Port range
==================== ==========
memcached-persistent 2000-2009
memcached-volatile   2010-2019
smtp                 2025-2029
neo (admin, master)  2050-2052
mariadb              2099
zeo                  2100-2149
balancer             2150-2199
zope                 2200-*
jupyter              8888
caucase              8890,8891
==================== ==========
Non-zope partitions are unique in an ERP5 cluster, so you shouldn't have to
care about them as a user (but a Software Release developer needs to know
them).
Zope partitions should be assigned port ranges starting at 2200, incrementing
by some value which depends on how many zope processes you want per partition
(see the ``port-base`` parameter in ``zope-partition-dict``).
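
For instance (family names and counts below are illustrative), two zope
partitions can share the port space above 2200 while keeping some headroom::

  {
    "zope-partition-dict": {
      "public": {"family": "public", "instance-count": 2, "port-base": 2200},
      "backoffice": {"family": "backoffice", "instance-count": 2, "port-base": 2210}
    }
  }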
Notes to the Software Release developer: These ranges are not strictly
defined. Not every port is actually used, so one may reduce already-assigned
ranges if needed (ex: memcached partitions actually use fewer ports). There
should be enough room for evolution (as between the smtp and mariadb types). It
is important not to allocate any port after 2200, as users may have assigned
ports to their zope processes.
.. _RewriteRules: http://httpd.apache.org/docs/current/en/mod/mod_rewrite.html#rewriterule
.. _VirtualHostMonster: http://docs.zope.org/zope2/zope2book/VirtualHosting.html
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Parameters to instantiate ERP5",
"type": "object",
"additionalProperties": false,
"definitions": {
"routing-rule-list": {
"description": "Maps the path received in requests to given zope path. Rules are applied in the order they are given. This requires the path received from the outside world (typically: frontend) to have its root correspond to Zope's root (for frontend: 'path' parameter must be empty), with the customary VirtualHostMonster construct (for frontend: 'type' must be 'zope').",
"type": "array",
"default": [
[
"/",
"/"
]
],
"items": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": [
{
"title": "External path",
"description": "Path as received from the outside world, based on VirtualHostRoot element.",
"type": "string"
},
{
"title": "Internal path",
"description": "Zope path, based on Zope root object, the external path should correspond to. '%(site-id)s' is replaced by the site-id value, and '%%' replaced by '%'.",
"type": "string"
}
]
}
},
"tcpv4port": {
"$ref": "./schemas-definitions.json#/tcpv4port"
}
},
"properties": {
"sla-dict": {
"description": "Where to request instances. Each key is a query string for criterions (e.g. \"computer_guid=foo\"), and each value is a list of partition references (note: Zope partitions reference must be prefixed with \"zope-\").",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
},
"uniqueItems": true
},
"type": "object"
},
"site-id": {
"description": "ERP5Site object's id. An empty value disables automatic site creation.",
"default": "erp5",
"type": "string"
},
"bt5": {
"description": "Business Template to install at automatic site creation. By default, all configurators are installed.",
"type": "string"
},
"id-store-interval": {
"description": "Set Store Interval of default SQL Non Continuous Increasing Id Generator at automatic site creation. If unset, the value from the erp5_core Business Template is not touched.",
"type": "integer"
},
"timezone": {
"description": "Zope's timezone. Possible values are determined by host's libc, and typically come from a separate package (tzdata, ...)",
"default": "UTC",
"type": "string"
},
"deadlock-debugger-password": {
"description": "Password for /manage_debug_threads",
"type": "string"
},
"inituser-login": {
"description": "Login of the initial/rescue user",
"default": "zope",
"type": "string"
},
"inituser-password": {
"description": "Password of the initial/rescue user",
"type": "string"
},
"developer-list": {
"description": "List of logins which should get the Developer role (required to modify portal_components' content), defaulting to inituser-login's value",
"items": {
"pattern": "^\\S+$",
"type": "string"
},
"uniqueItems": true,
"type": "array"
},
"activity-timeout": {
"description": "How long a CMFActivity-initiated transaction may last, in seconds",
"default": null,
"type": [
"number",
"null"
]
},
"publisher-timeout": {
"description": "How long a publisher-initiated transaction may last, in seconds",
"default": 300,
"type": [
"number",
"null"
]
},
"family-override": {
"description": "Family-wide options, possibly overriding global options",
"default": {},
"patternProperties": {
".*": {
"default": {},
"properties": {
"webdav": {
"description": "Serve webdav queries, implies timerserver-interval=0 (disabled)",
"default": false,
"type": "boolean"
},
"activity-timeout": {
"description": "Override global activity timeout",
"type": [
"number",
"null"
]
},
"publisher-timeout": {
"description": "Override global publisher timeout",
"type": [
"number",
"null"
]
}
},
"type": "object"
}
},
"type": "object"
},
"hostalias-dict": {
"description": "Hostname-to-domain-name mapping",
"default": {},
"additionalProperties": {
"description": "A hostname to which current entry will resolve",
"type": "string"
},
"type": "object"
},
"hosts-dict": {
"description": "Host entries to be used in addition to and/or overriding auto-generated ones (erp5-catalog-0, erp5-memcached-persistent, erp5-memcached-volatile and erp5-smtp)",
"patternProperties": {
".*": {
"description": "An IP or domain name to which current entry will resolve",
"type": "string"
}
},
"type": "object"
},
"frontend": {
"description": "Front-end slave instance request parameters",
"properties": {
"software-url": {
"description": "Front-end's software type. If this parameter is empty, no front-end instance is requested. Else, sla-dict must specify 'frontend' which is a special value matching all frontends (e.g. {\"instance_guid=bar\": [\"frontend\"]}).",
"default": "",
"type": "string",
"format": "uri"
},
"domain": {
"description": "The domain name to request front-end to respond as.",
"default": "",
"type": "string"
},
"software-type": {
"description": "Request a front-end slave instance of this software type.",
"default": "RootSoftwareInstance",
"type": "string"
},
"virtualhostroot-http-port": {
"description": "Front-end slave http port. Port where http requests to frontend will be redirected.",
"default": 80,
"type": "integer"
},
"virtualhostroot-https-port": {
"description": "Front-end slave https port. Port where https requests to frontend will be redirected.",
"default": 443,
"type": "integer"
}
},
"type": "object"
},
"wsgi": {
"description": "If set to true, Zope is run as a WSGI application, instead of using the Medusa HTTP server.",
"type": "boolean",
"default": true
},
"zope-partition-dict": {
"description": "Zope layout definition",
"default": {
"1": {}
},
"patternProperties": {
".*": {
"additionalProperties": false,
"properties": {
"family": {
"description": "The family this partition is part of. For example: 'public', 'admin', 'backoffice', 'web-service'... Each family gets its own balancer entry. It has no special meaning for the system.",
"default": "default",
"type": "string"
},
"instance-count": {
"description": "Number of Zopes to setup on this partition",
"default": 1,
"type": "integer"
},
"thread-amount": {
"description": "Number of worker threads for each created Zope process",
"default": 4,
"type": "integer"
},
"timerserver-interval": {
"description": "Timerserver tick period, in seconds, or 0 to disable",
"default": 1,
"type": "number"
},
"private-dev-shm": {
"description": "Size of private /dev/shm for wendelin.core. If sysctl kernel.unprivileged_userns_clone exists, it must be set to 1.",
"type": "string"
},
"ssl-authentication": {
"title": "Enable SSL Client authentication on this zope instance.",
"description": "If set to true, will set SSL Client verification to required on apache VirtualHost which allow to access this zope instance.",
"type": "boolean",
"default": false
},
"longrequest-logger-interval": {
"description": "Period, in seconds, with which LongRequestLogger polls worker thread stack traces, or -1 to disable",
"default": -1,
"type": "integer"
},
"longrequest-logger-timeout": {
"description": "Transaction duration after which LongRequestLogger will start logging its stack trace, in seconds",
"default": 1,
"type": "integer"
},
"large-file-threshold": {
"description": "Requests bigger than this size get saved into a temporary file instead of being read completely into memory, in bytes",
"default": "10MB",
"type": "string"
},
"port-base": {
"allOf": [
{
"$ref": "#/definitions/tcpv4port"
},
{
"description": "Start allocating ports at this value. Useful if one needs to make several partitions share the same port range (ie, several partitions bound to a single address)",
"default": 2200
}
]
}
},
"type": "object"
}
},
"type": "object"
},
"kumofs": {
"description": "Persistent memcached service",
"allOf": [
{
"$ref": "./instance-kumofs-schema.json"
},
{
"properties": {
"tcpv4-port": {
"default": 2000
}
}
}
],
"type": "object"
},
"memcached": {
"description": "Volatile memcached service",
"allOf": [
{
"$ref": "./instance-kumofs-schema.json"
},
{
"properties": {
"tcpv4-port": {
"default": 2010
}
}
}
],
"type": "object"
},
"cloudooo-url-list": {
"description": "Format conversion service URLs",
"type": "array",
"items": {
"pattern": "^https?://",
"type": "string",
"format": "uri"
}
},
"cloudooo-retry-count": {
"description": "Define retry count for cloudooo in network error case in test",
"type": "integer",
"default": 2
},
"smtp": {
"description": "Mail queuing and relay service",
"allOf": [
{
"$ref": "./instance-smtp-schema.json"
},
{
"properties": {
"tcpv4-port": {
"default": 2010
}
}
}
],
"type": "object"
},
"mariadb": {
"description": "Relational database service",
"allOf": [
{
"$ref": "./instance-mariadb-schema.json"
},
{
"properties": {
"tcpv4-port": {
"default": 2099
}
}
}
],
"type": "object"
},
"zodb-zeo": {
"description": "Common settings ZEO servers",
"additionalProperties": false,
"properties": {
"tcpv4-port": {
"allOf": [
{
"$ref": "#/definitions/tcpv4port"
},
{
"description": "Start allocating ports at this value, going upward"
}
]
},
"backup-periodicity": {
"description": "When to backup, specified in the same format as for systemd.time(7) calendar events (years & seconds not supported, DoW & DoM can not be combined). Enter 'never' to disable backups.",
"default": "daily",
"type": "string"
},
"tidstorage-repozo-path": {
"description": "Directory for backup timestamp and tidstorage status files.",
"default": "~/srv/backup/tidstorage",
"type": "string"
}
},
"type": "object"
},
"zodb": {
"description": "Zope Object DataBase mountpoints. See https://github.com/zopefoundation/ZODB/blob/4/src/ZODB/component.xml for extra options.",
"items": {
"required": [
"type"
],
"properties": {
"name": {
"description": "Database name",
"default": "main",
"type": "string"
},
"mount-point": {
"description": "Mount point",
"default": "/",
"type": "string"
},
"storage-dict": {
"description": "Storage configuration. For NEO, 'logfile' is automatically set (see https://lab.nexedi.com/nexedi/neoppod/blob/master/neo/client/component.xml for other settings).",
"properties": {
"ssl": {
"description": "For external NEO. Pass false if you want to disable SSL or pass custom values for ca/cert/key.",
"default": true,
"type": "boolean"
}
},
"patternProperties": {
".!$": {
"$ref": "#/properties/zodb/items/patternProperties/.!$"
}
},
"additionalProperties": {
"$ref": "#/properties/zodb/items/additionalProperties"
},
"type": "object"
},
"type": true,
"server": true
},
"oneOf": [
{
"title": "zeo",
"properties": {
"type": {
"description": "Storage type",
"const": "zeo"
},
"server": {
"description": "Instantiate a server. If missing, 'storage-dict' must contain the necessary properties to mount the ZODB. The partition reference is 'zodb'.",
"$ref": "./instance-zeo-schema.json"
}
}
},
{
"title": "neo",
"properties": {
"type": {
"description": "Storage type",
"const": "neo"
},
"server": {
"description": "Instantiate a server. If missing, 'storage-dict' must contain the necessary properties to mount the ZODB. Partitions references are 'neo-0', 'neo-1', ...",
"$ref": "../neoppod/instance-neo-input-schema.json#/definitions/neo-cluster"
}
}
}
],
"patternProperties": {
".!$": {
"description": "Override with the value of the first item whose zope id matches against the pattern.",
"items": {
"items": [
{
"description": "Override pattern (Python regular expression).",
"type": "string"
},
{
"description": "Override value (parameter for matching nodes).",
"type": [
"integer",
"string"
]
}
],
"type": "array"
},
"type": "array"
}
},
"additionalProperties": {
"type": [
"integer",
"string"
]
},
"type": "object"
},
"type": "array"
},
"jupyter": {
"description": "Jupyter subinstance parameters",
"additionalProperties": false,
"properties": {
"enable": {
"description": "Whether to enable creation of associated Jupyter subinstance",
"default": false,
"type": "boolean"
},
"zope-family": {
"description": "Zope family to connect Jupyter to by default",
"default": "<first instantiated Zope family>",
"type": "string"
}
},
"type": "object"
},
"wcfs": {
"description": "Parameters for wendelin.core filesystem",
"additionalProperties": false,
"properties": {
"enable": {
"description": "Whether to enable WCFS filesystem and use it to access ZBigArray/ZBigFile data. In WCFS mode wendelin.core clients (Zope/ERP5 processes) share in-RAM cache for in-ZODB data without duplicating it for every client. This cache sharing does not affect correctness as isolation property is continued to be provided to every client.",
"default": false,
"type": "boolean"
}
}
},
"wendelin-core-zblk-fmt": {
"description": "In wendelin.core there are 2 formats for storing data, so called ZBlk0 and ZBlk1. See https://lab.nexedi.com/nexedi/wendelin.core/blob/2e5e1d3d/bigfile/file_zodb.py#L19 for more details.",
"default": "",
"type": "string"
},
"caucase": {
"description": "Caucase certificate authority parameters",
"allOf": [
{
"properties": {
"url": {
"title": "Caucase URL",
"description": "URL of existing caucase instance to use. If empty, a new caucase instance will be deployed. If not empty, other properties in this section will be ignored.",
"default": "",
"type": "string",
"format": "uri"
}
}
},
{
"$ref": "../caucase/instance-caucase-input-schema.json"
}
],
"type": "object"
},
"test-runner": {
"description": "Test runner parameters.",
"additionalProperties": false,
"properties": {
"enabled": {
"description": "Generate helper scripts to run test suite.",
"default": true,
"type": "boolean"
},
"coverage": {
"type": "object",
"title": "Coverage",
"description": "Coverage configuration",
"additionalProperties": false,
"properties": {
"enabled": {
"description": "Collect python coverage data during test run.",
"default": false,
"type": "boolean"
},
"include": {
"description": "File name patterns to include in coverage data, relative to software buildout's directory. Default to all repositories defined in software by ${erp5_repository_list:repository_id_list}.",
"type": "array",
"items": {
"type": "string"
},
"examples": [
[
"parts/erp5/*",
"parts/custom-repository/*",
"develop-eggs/custom-egg/*"
]
]
},
"branch": {
"description": "Enable branch coverage",
"type": "boolean",
"default": false
},
"upload-url": {
"description": "URL to upload coverage data. This is interpreted as a RFC 6570 URI Template, with the following parameters: test_name, test_result_id and test_result_revision. The request will be a PUT request with the coverage file content as body, suitable for WebDav servers. If the URL contains user and password, they will be used to attempt authentication using Digest and Basic authentication schemes.",
"type": "string",
"format": "uri",
"examples": [
"https://user:password@example.com/{test_result_id}/{test_name}.coverage.sqlite3"
]
}
}
},
"node-count": {
"description": "Number of tests this instance can execute in parallel. This must be at least equal to the number of nodes configured on testnode running the test",
"default": 3,
"type": "integer"
},
"extra-database-count": {
"description": "Number of extra databases this instance tests will need.",
"default": 3,
"type": "integer"
},
"selenium": {
"default": {
"target": "firefox"
},
"examples": [
{
"target": "selenium-server",
"server-url": "https://selenium.example.com",
"desired-capabilities": {
"browserName": "firefox",
"version": "68.0.2esr",
"acceptInsecureCerts": true
}
},
{
"target": "selenium-server",
"server-url": "https://selenium.example.com",
"desired-capabilities": {
"browserName": "chrome",
"version": "91.0.4472.101"
}
}
],
"oneOf": [
{
"type": "object",
"title": "Selenium Server",
"description": "Configuration for Selenium server",
"additionalProperties": false,
"required": [
"desired-capabilities",
"server-url",
"target"
],
"properties": {
"target": {
"description": "Target system",
"type": "string",
"const": "selenium-server",
"default": "selenium-server"
},
"server-url": {
"description": "URL of the selenium server",
"type": "string",
"format": "uri"
},
"verify-server-certificate": {
"description": "Verify the SSL/TLS certificate of the selenium server when using HTTPS",
"type": "boolean",
"default": true
},
"server-ca-certificate": {
"description": "PEM encoded bundle of CA certificates to verify the SSL/TLS certificate of the selenium server when using HTTPS",
"type": "string",
"default": "Root certificates from http://certifi.io/en/latest/"
},
"desired-capabilities": {
"description": "Desired browser capabilities",
"required": [
"browserName"
],
"type": "object",
"properties": {
"browserName": {
"description": "Name of the browser being used",
"type": "string",
"examples": [
"firefox",
"chrome",
"safari"
]
},
"version": {
"description": "The browser version",
"type": "string"
}
}
}
}
},
{
"type": "object",
"title": "Firefox",
"description": "Configuration for using firefox running as a sub-process",
"additionalProperties": false,
"properties": {
"target": {
"description": "Target system",
"const": "firefox",
"type": "string",
"default": "firefox"
}
}
}
]
},
"random-activity-priority": {
"type": "string",
"title": "Random Activity Priority",
"description": "Control `random_activity_priority` argument of test runner. Can be set to an empty string to automatically generate a seed for each test."
}
},
"type": "object"
},
"balancer": {
"description": "HTTP(S) load balancer proxy parameters",
"properties": {
"path-routing-list": {
"$ref": "#/definitions/routing-rule-list",
"title": "Global path routing rules"
},
"family-path-routing-dict": {
"type": "object",
"title": "Family-specific path routing rules",
"description": "Applied, only for the eponymous family, before global path routing rules.",
"patternProperties": {
".+": {
"$ref": "#/definitions/routing-rule-list"
}
}
},
"ssl": {
"description": "HTTPS certificate generation parameters",
"additionalProperties": false,
"properties": {
"frontend-caucase-url-list": {
"title": "Frontend Caucase URL List",
"description": "List of URLs of caucase service of frontend groups to authenticate access from them.",
"type": "array",
"items": {
"type": "string",
"format": "uri"
},
"uniqueItems": true
},
"caucase-url": {
"title": "Caucase URL",
"description": "URL of caucase service to use. If not set, global setting will be used.",
"type": "string",
"format": "uri"
},
"csr": {
"title": "csr",
"description": "PEM-encoded certificate signature request to request server certificate with. If not provided, HTTPS will be disabled.",
"type": "string"
},
"max-crl-update-delay": {
"title": "Periodicity of CRL update (days)",
"description": "CRL will be updated from caucase at least this often.",
"type": "number",
"default": 1.0
}
},
"type": "object"
}
},
"type": "object"
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "Values returned by ERP5 instantiation",
"additionalProperties": false,
"properties": {
"hosts-dict": {
"description": "Hosts mapping, including auto-generated entries",
"patternProperties": {
".*": {
"description": "IP current entry resolves to",
"type": "string"
}
},
"type": "object"
},
"site-id": {
"description": "Chosen ERP5Site object identifier",
"type": "string"
},
"inituser-login": {
"description": "Initial user login",
"type": "string"
},
"inituser-password": {
"description": "Initial user password",
"type": "string"
},
"deadlock-debugger-password": {
"description": "Deadlock debugger password",
"type": "string"
},
"memcached-persistent-url": {
"description": "Persistent memcached access information",
"pattern": "^memcached://",
"type": "string"
},
"memcached-volatile-url": {
"description": "Volatile memcached access information",
"pattern": "^memcached://",
"type": "string"
},
"mariadb-database-list": {
"description": "Relational database access information",
"items": {
"pattern": "^mysql://",
"type": "string"
},
"uniqueItems": true,
"type": "array"
},
"mariadb-test-database-list": {
"description": "Relational database access information",
"items": {
"pattern": "^mysql://",
"type": "string"
},
"uniqueItems": true,
"type": "array"
},
"neo-masters": {
"$ref": "../neoppod/instance-neo-output-schema.json#/properties/masters"
},
"neo-admins": {
"$ref": "../neoppod/instance-neo-output-schema.json#/properties/admins"
},
"jupyter-url": {
"description": "Jupyter notebook web UI access information",
"pattern": "^https://",
"type": "string"
},
"caucase-http-url": {
"description": "Caucase url on HTTP. For HTTPS URL, uses https scheme, if port is explicitely specified in http URL, take that port and add 1 and use it as https port. If it is not specified.",
"pattern": "^http://",
"type": "string"
}
},
"patternProperties": {
"family-.*": {
"description": "Zope family access information",
"pattern": "^https://",
"type": "string"
}
},
"type": "object"
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"additionalProperties": false,
"properties": {
"tcpv4-port": {
"allOf": [
{
"$ref": "./schemas-definitions.json#/tcpv4port"
},
{
"description": "Start allocating ports at this value, going upward"
}
]
},
"ram-storage-size": {
"description": "If 0 use disk storage, otherwise use ram and limit data size to this many megabytes",
"default": 0,
"type": "integer"
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"additionalProperties": false,
"properties": {
"tcpv4-port": {
"allOf": [
{
"$ref": "./schemas-definitions.json#/tcpv4port"
},
{
"description": "Start allocating ports at this value, going downward"
}
]
},
"database-list": {
"description": "Databases to create and respective user credentials getting all privileges on it",
"default": [
{
"name": "erp5",
"user": "user",
"password": "insecure"
}
],
"minItems": 1,
"items": {
"required": [
"name",
"user",
"password"
],
"properties": {
"name": {
"description": "Database name",
"type": "string"
},
"user": {
"description": "User name",
"type": "string"
},
"password": {
"description": "User password",
"type": "string"
}
},
"type": "object"
},
"type": "array"
},
"catalog-backup": {
"description": "Backup control knobs",
"properties": {
"full-retention-days": {
"description": "How many days full backups must be retained, -1 meaning full backups are disabled and 0 meaning no expiration",
"default": 7,
"minimum": -1,
"type": "integer"
},
"incremental-retention-days": {
"description": "How many days incremental backups (binlogs) must be retained, -1 meaning incremental backups are disabled and 0 meaning no expiration, defaulting to full-retention-days' value",
"minimum": -1,
"type": "integer"
}
},
"type": "object"
},
"backup-periodicity": {
"description": "When to backup, specified in the same format as for systemd.time(7) calendar events (years & seconds not supported, DoW & DoM can not be combined).",
"default": "daily",
"type": "string"
},
"innodb-buffer-pool-size": {
"description": "See MariaDB documentation on innodb_buffer_pool_size",
"minimum": 0,
"type": "integer"
},
"innodb-buffer-pool-instances": {
"description": "See MariaDB documentation on innodb_buffer_pool_instances",
"minimum": 1,
"type": "integer"
},
"innodb-log-file-size": {
"description": "See MariaDB documentation on innodb_log_file_size",
"minimum": 0,
"type": "integer"
},
"innodb-log-buffer-size": {
"description": "See MariaDB documentation on innodb_log_buffer_size",
"minimum": 0,
"type": "integer"
},
"innodb-file-per-table": {
"description": "See MariaDB documentation on innodb_file_per_table",
"minimum": 0,
"maximum": 1,
"default": 0,
"type": "integer"
},
"long-query-time": {
"description": "Number of seconds above which long queries are logged",
"minimum": 0,
"default": 1,
"type": "number"
},
"max-connection-count": {
"description": "See MariaDB documentation on max_connections. If not provided, a value suitable for the number of request Zope processes is chosen.",
"minimum": 0,
"type": "integer"
},
"relaxed-writes": {
"description": "When enabled, sets innodb_flush_log_at_trx_commit = 0, innodb_flush_method = nosync, innodb_doublewrite = 0 and sync_frm = 0 - RTFM, those options are dangerous",
"default": false,
"type": "boolean"
},
"character-set-server": {
"description": "The server default character set",
"default": "utf8mb4",
"type": "string"
},
"collation-server": {
"description": "The server default collation",
"default": "utf8mb4_general_ci",
"type": "string"
},
"ssl": {
"description": "Enable and define SSL support for network connections",
"default": {},
"properties": {
"ca-crt": {
"description": "Certificate Authority's certificate, in PEM format",
"type": "string"
},
"crt": {
"description": "Server's certificate, in PEM format (mandatory to enable SSL support)",
"type": "string"
},
"key": {
"description": "Server's key, in PEM format (mandatory to enable SSL support)",
"type": "string"
},
"crl": {
"description": "Server's certificate revocation list, in PEM format",
"type": "string"
},
"cipher": {
"description": "Permissible cipher specifications, separated by colons",
"type": "string"
}
},
"type": "object"
},
"odbc-ini": {
"description": "Contents of odbc.ini file, see unixodbc document",
"default": "",
"type": "string"
},
"environment-variables": {
"description": "Extra environment variables for mysqld may be required to use third party ODBC libraries for CONNECT storage engine.",
"items": {
"type": "string"
},
"type": "array"
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"additionalProperties": false,
"properties": {
"tcpv4-port": {
"allOf": [
{
"$ref": "./schemas-definitions.json#/tcpv4port"
},
{
"description": "Start allocating ports at this value, going upward"
}
]
},
"postmaster": {
"description": "Mail address to send technical mails to. Non-empty value required for smptd relay service to be deployed. Values will be put in alias-dict as 'postmaster' key (alias-dict takes precedence)",
"default": "",
"type": "string"
},
"alias-dict": {
"description": "Mail alias support",
"default": {},
"patternProperties": {
".*": {
"description": "List of addresses alias expands to",
"type": "array"
}
},
"type": "object"
},
"relay": {
"description": "Forward outgoing mails to a specific relay. If enabled, relay must support TLS-encrypted SASL authentication.",
"dependencies": {
"host": [
"sasl-credential"
]
},
"properties": {
"host": {
"description": "Host name or address of relay, with optional port (ex: '[example.com]:submission'). Enclosing hostname with [] prevents MX lookup.",
"type": "string"
},
"sasl-credential": {
"description": "SASL credential, in the login:password form",
"type": "string"
}
},
"default": {},
"type": "object"
},
"divert": {
"description": "Intercept all mails and send them to given addresses instead of original recipient",
"type": "array",
"items": {
"type": "string"
},
"uniqueItems": true
}
}
}
{
"$schema": "http://json-schema.org/draft-04/schema#",
"additionalProperties": false,
"properties": {
"backup": {
"description": "'%(backup)s' is expanded to partition's ZODB backup path (typically 'srv/backup/zodb'), and %(name)s with the export id",
"default": "%(backup)s/%(name)s",
"type": "string"
},
"family": {
"description": "Opaque name used to regroup/separate mountpoints under different ZEO processes (must be valid as a file name and as a ConfigParser section name)",
"default": "default",
"pattern": "^[^<>:\"/\\|?*\\]\\[ ]*$",
"type": "string"
},
"path": {
"description": "FileStorage file path, '%(zodb)s' occurrences are replaced with the path to partition's srv/zodb directory, and %(name)s with the export id",
"default": "%(zodb)s/%(name)s.fs",
"type": "string"
}
},
"type": "object"
}
{
"$schema": "http://json-schema.org/draft-07/schema#",
"tcpv4port": {
"minimum": 0,
"maximum": 65535,
"type": "integer"
}
}
[buildout]
extends = software.cfg
shared-parts = /opt/slapgrid/shared-parts
eggs-directory = /opt/slapgrid/shared-eggs
abi-tag-eggs = true
[buildout]
extends =
../../stack/erp5-zope2/buildout.cfg
{
"name": "ERP5",
"description": "ERP5, Open-Source ERP",
"serialisation": "json-in-xml",
"software-type": {
"default": {
"title": "Default",
"software-type": "default",
"request": "instance-erp5-input-schema.json",
"response": "instance-erp5-output-schema.json",
"index": 0
}
}
}
Tests for ERP5 software release
##############################################################################
#
# Copyright (c) 2018 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from setuptools import setup, find_packages
version = '0.0.1.dev0'
name = 'slapos.test.erp5'
with open("README.md") as f:
long_description = f.read()
setup(name=name,
version=version,
description="Test for SlapOS' ERP5 software release",
long_description=long_description,
long_description_content_type='text/markdown',
maintainer="Nexedi",
maintainer_email="info@nexedi.com",
url="https://lab.nexedi.com/nexedi/slapos",
packages=find_packages(),
install_requires=[
'slapos.core',
'supervisor',
'slapos.libnetworkcache',
'erp5.util',
'psutil',
'requests',
'mysqlclient',
'cryptography',
'pexpect',
'pyOpenSSL',
],
test_suite='test',
)
##############################################################################
#
# Copyright (c) 2022 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import itertools
import json
import os
import sys
from slapos.testing.testcase import makeModuleSetUpAndTestCaseClass
_setUpModule, SlapOSInstanceTestCase = makeModuleSetUpAndTestCaseClass(
os.path.abspath(
os.path.join(os.path.dirname(__file__), '..', '..', 'software.cfg')))
setup_module_executed = False
def setUpModule():
# slapos.testing.testcase's setUpModule only needs to be executed once
global setup_module_executed
if not setup_module_executed:
_setUpModule()
setup_module_executed = True
# Metaclass to parameterize our tests.
# This is a rough adaption of the parameterized package:
# https://github.com/wolever/parameterized
# Consult following note for rationale why we don't use parameterized:
# https://lab.nexedi.com/nexedi/slapos/merge_requests/1306
class ERP5InstanceTestMeta(type):
"""Adjust ERP5InstanceTestCase instances to be run in several flavours (e.g. NEO/ZEO)
Adjustments can be declared via setting the '__test_matrix__' attribute
of a test case.
A test matrix is a dict which maps the flavoured class name suffix to
a tuple of parameters.
A parameter is a function which receives the instance_parameter_dict
and modifies it in place (therefore no return value is needed).
You can use the 'matrix' helper function to construct a test matrix.
If .__test_matrix__ is 'None' the test case is ignored.
If the test case should be run without any adaptations, you can set
.__test_matrix__ to 'matrix((default,))'.
"""
def __new__(cls, name, bases, attrs):
base_class = super().__new__(cls, name, bases, attrs)
if base_class._isParameterized():
cls._parameterize(base_class)
return base_class
# _isParameterized tells whether class is parameterized.
# All classes with 'metaclass=ERP5InstanceTestMeta' are parameterized,
# except for classes which have been automatically created from such a
# user class. This exception prevents infinite recursion due to a
# parameterized class trying to parameterize itself again.
def _isParameterized(self):
return not getattr(self, '.created_by_parametrize', False)
# Create multiple test classes from single definition.
@classmethod
def _parameterize(cls, base_class):
mod_dict = sys.modules[base_class.__module__].__dict__
for class_name_suffix, parameter_tuple in (base_class.__test_matrix__ or {}).items():
parameterized_cls_dict = dict(
base_class.__dict__,
**{
# Avoid an infinite loop caused by a parameterized class which
# would parameterize itself again and again...
".created_by_parametrize": True,
# Switch
#
# .getInstanceParameterDict to ._test_getInstanceParameterDict
# ._base_getInstanceParameterDict to .getInstanceParameterDict
#
# so that we can inject a base implementation which wraps the
# user-defined getInstanceParameterDict.
"_test_getInstanceParameterDict": base_class.getInstanceParameterDict,
"getInstanceParameterDict": cls._getParameterizedInstanceParameterDict(parameter_tuple)
}
)
name = f"{base_class.__name__}_{class_name_suffix}"
mod_dict[name] = type(name, (base_class,), parameterized_cls_dict)
# _getParameterizedInstanceParameterDict returns a modified version of
# a test case's original 'getInstanceParameterDict'. The modified version
# applies parameters on the default instance parameters.
@staticmethod
def _getParameterizedInstanceParameterDict(parameter_tuple):
@classmethod
def getInstanceParameterDict(cls):
instance_parameter_dict = json.loads(
cls._test_getInstanceParameterDict().get("_", r"{}")
)
[p(instance_parameter_dict) for p in parameter_tuple]
return {"_": json.dumps(instance_parameter_dict)}
return getInstanceParameterDict
# Hide tests in unpatched base class: It doesn't make sense to run tests
# in original class, because parameters have not been assigned yet.
#
# We can't simply call 'delattr', because this wouldn't remove
# inherited tests. Overriding __dir__ is sufficient, because this is
# the way unittest discovers tests:
# https://github.com/python/cpython/blob/3.11/Lib/unittest/loader.py#L237
def __dir__(self):
if self._isParameterized():
return [attr for attr in super().__dir__() if not attr.startswith('test')]
return super().__dir__()
def matrix(*parameter_tuple):
"""matrix creates a mapping of test_name -> parameter_tuple.
Each provided parameter_tuple won't be combined within itself,
but with any other provided parameter_tuple, for instance
>>> parameter_tuple0 = (param0, param1)
>>> parameter_tuple1 = (param2, param3)
>>> matrix(parameter_tuple0, parameter_tuple1)
will return all options of (param0 | param1) & (param2 | param3):
- param0_param2
- param0_param3
- param1_param2
- param1_param3
"""
return {
"_".join([p.__name__ for p in params]): params
for params in itertools.product(*parameter_tuple)
}
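# For illustration (with hypothetical parameter functions a, b and c):
# matrix((a, b)) yields {"a": (a,), "b": (b,)}, while matrix((a, b), (c,))
# yields {"a_c": (a, c), "b_c": (b, c)}, i.e. one flavoured test class per
# combination.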
# Define parameters (function which receives instance params + modifies them).
#
# default runs tests without any adaptation
def default(instance_parameter_dict): ...
def zeo(instance_parameter_dict):
instance_parameter_dict['zodb'] = [{"type": "zeo", "server": {}}]
def neo(instance_parameter_dict):
# We don't provide encryption certificates in test runs for the sake
# of simplicity. By default SSL is turned on, we need to explicitly
# deactivate it:
# https://lab.nexedi.com/nexedi/slapos/blob/a8150a1ac/software/neoppod/instance-neo-input-schema.json#L61-65
instance_parameter_dict['zodb'] = [{"type": "neo", "server": {"ssl": False}}]
class ERP5InstanceTestCase(SlapOSInstanceTestCase, metaclass=ERP5InstanceTestMeta):
"""ERP5 base test case
"""
__test_matrix__ = matrix((zeo, neo)) # switch between NEO and ZEO mode
@classmethod
def getRootPartitionConnectionParameterDict(cls):
"""Return the output parameters from the root partition"""
return json.loads(
cls.computer_partition.getConnectionParameterDict()['_'])
@classmethod
def getComputerPartition(cls, partition_reference):
for computer_partition in cls.slap.computer.getComputerPartitionList():
if partition_reference == computer_partition.getInstanceParameter(
'instance_title'):
return computer_partition
@classmethod
def getComputerPartitionPath(cls, partition_reference):
partition_id = cls.getComputerPartition(partition_reference).getId()
return os.path.join(cls.slap._instance_root, partition_id)
##############################################################################
#
# Copyright (c) 2022 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import datetime
import json
import pathlib
import subprocess
import time
import typing
import urllib.parse
import psutil
import requests
from . import ERP5InstanceTestCase, default, matrix, setUpModule
from .test_erp5 import ZopeSkinsMixin
class TestOrderBuildPackingListSimulation(
ZopeSkinsMixin,
ERP5InstanceTestCase,
):
"""Create orders and build packing lists.
"""
__partition_reference__ = 's'
__test_matrix__ = matrix((default, ))
_start: datetime.datetime
_previous: datetime.datetime
@classmethod
def getInstanceParameterDict(cls) -> dict:
return {
'_':
json.dumps(
{
"bt5":
" ".join(
[
"erp5_full_text_mroonga_catalog",
"erp5_configurator_standard",
"erp5_scalability_test",
]),
"mariadb": {
# We use a large innodb-buffer-pool-size because the simulation
# select method used for sale packing list does not use an index and
# causes slow queries
"innodb-buffer-pool-size": 32 * 1024 * 1024 * 1024,  # 32 GiB
},
"zope-partition-dict": {
"activities": {
"instance-count": 32,
"family": "activities",
"thread-amount": 2,
"port-base": 2300
},
"default": {
"instance-count": 1,
"family": "default",
"port-base": 2200
},
},
})
}
@classmethod
def _setUpClass(cls) -> None:
super()._setUpClass()
cls.zope_base_url = cls._getAuthenticatedZopeUrl('')
cls.create_sale_order_batch_url = urllib.parse.urljoin(
cls.zope_base_url, 'ERP5Site_createScalabilityTestSaleOrderBatch')
def setUp(self) -> None:
super().setUp()
self.measurement_file = open(f'measures{self.id()}.jsonl', 'w')
self.addCleanup(self.measurement_file.close)
def write_measurement(
self, measurement: dict[str, typing.Union[str, float]]) -> None:
json.dump(
measurement,
self.measurement_file,
)
self.measurement_file.write('\n')
self.measurement_file.flush()
def take_measurements(self, step: str) -> None:
# Time for this iteration
now = datetime.datetime.now()
elapsed = now - self._previous
self._previous = now
# Memory usage of all zopes
with self.slap.instance_supervisor_rpc as supervisor:
zope_memory_info_list = [
psutil.Process(process['pid']).memory_info()
for process in supervisor.getAllProcessInfo()
if process['name'].startswith('zope-') and process['pid']
]
zope_total_rss = sum(mem.rss for mem in zope_memory_info_list)
zope_count = len(zope_memory_info_list)
# Database size
root_fs = pathlib.Path(
self.getComputerPartitionPath('zodb')) / 'srv' / 'zodb' / 'root.fs'
root_fs_size = root_fs.stat().st_size
self.logger.info(
"Measurements for %s (after %s): "
"elapsed=%s zope_total_rss=%s / %s root_fs_size=%s",
step,
now - self._start,
elapsed,
zope_total_rss,
zope_count,
root_fs_size,
)
self.write_measurement(
{
'step': step,
'step_duration_seconds': elapsed.total_seconds(),
'step_duration': str(elapsed),
'zope_total_rss': zope_total_rss,
'zope_count': zope_count,
'root_fs_size': root_fs_size,
'now': str(now),
})
def test(self) -> None:
self._start = self._previous = datetime.datetime.now()
with requests.Session() as session:
ret = session.get(
urllib.parse.urljoin(
self.zope_base_url, 'ERP5Site_bootstrapScalabilityTest'),
verify=False,
params={'user_quantity:int': 1})
if not ret.ok:
self.logger.error(ret.text)
ret.raise_for_status()
self._waitForActivities(
timeout=datetime.timedelta(hours=2).total_seconds())
# XXX the default reference generator for sale packing lists causes
# many conflict errors, so disable it.
self._addPythonScript(
script_id='Delivery_generateReference',
params='*args, **kw',
body='context.setReference("no reference for benchmark")',
)
self.take_measurements("setup")
# XXX now that we have installed business templates,
# restart all zopes to work around a bug with accessors not
# working after some time (packing_list_line.getStartDate no longer
# acquire from parent's sale packing list)
with self.slap.instance_supervisor_rpc as supervisor:
supervisor.stopAllProcesses()
supervisor.startAllProcesses()
self.slap.waitForInstance()
self.take_measurements("restart")
with requests.Session() as session:
for i in range(100):
for j in range(5):
ret = session.get(
self.create_sale_order_batch_url,
verify=False,
params={
'random_seed': f'{i}.{j}',
'order_count:int': '50',
},
)
if not ret.ok:
self.logger.error(ret.text)
ret.raise_for_status()
self._waitForActivities(
timeout=datetime.timedelta(hours=2).total_seconds())
self.take_measurements(f"iteration_{i+1:03}")
# final measurements, take a "zodb analyze" snapshot
zodb_cmd = pathlib.Path(
self.computer_partition_root_path
) / 'software_release' / 'bin' / 'zodb'
root_fs = pathlib.Path(
self.getComputerPartitionPath('zodb')) / 'srv' / 'zodb' / 'root.fs'
self.write_measurement(
{
'zodb analyze':
subprocess.check_output((zodb_cmd, 'analyze', root_fs), text=True)
})
# and a pt-query-digest for slow log
pt_query_digest = pathlib.Path(
self.computer_partition_root_path
) / 'software_release' / 'parts' / 'percona-toolkit' / 'bin' / 'pt-query-digest'
mariadb_slowquery_log = pathlib.Path(
self.getComputerPartitionPath(
'mariadb')) / 'var' / 'log' / 'mariadb_slowquery.log'
self.write_measurement(
{
'pt-query-digest':
subprocess.check_output(
(pt_query_digest, mariadb_slowquery_log), text=True)
})
import glob
import hashlib
import json
import logging
import os
import re
import shutil
import subprocess
import tempfile
import time
import urllib.parse
from http.server import BaseHTTPRequestHandler
from unittest import mock
import OpenSSL.SSL
import pexpect
import psutil
import requests
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID
from slapos.testing.testcase import ManagedResource
from slapos.testing.utils import (CrontabMixin, ManagedHTTPServer,
findFreeTCPPort)
from . import ERP5InstanceTestCase, setUpModule, matrix, default
setUpModule # pyflakes
class EchoHTTPServer(ManagedHTTPServer):
"""An HTTP Server responding with the request path and incoming headers,
encoded in json.
"""
class RequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
# type: () -> None
self.send_response(200)
self.send_header("Content-Type", "application/json")
response = json.dumps(
{
'Path': self.path,
'Incoming Headers': dict(self.headers.items()),
},
indent=2,
).encode('utf-8')
self.end_headers()
self.wfile.write(response)
log_message = logging.getLogger(__name__ + '.EchoHTTPServer').info
class EchoHTTP11Server(ManagedHTTPServer):
"""An HTTP/1.1 Server responding with the request path and incoming headers,
encoded in json.
"""
class RequestHandler(BaseHTTPRequestHandler):
protocol_version = 'HTTP/1.1'
def do_GET(self):
# type: () -> None
self.send_response(200)
self.send_header("Content-Type", "application/json")
response = json.dumps(
{
'Path': self.path,
'Incoming Headers': dict(self.headers.items()),
},
indent=2,
).encode('utf-8')
self.send_header("Content-Length", str(len(response)))
self.end_headers()
self.wfile.write(response)
log_message = logging.getLogger(__name__ + '.EchoHTTP11Server').info
class CaucaseService(ManagedResource):
"""A caucase service.
"""
url = None # type: str
directory = None # type: str
_caucased_process = None # type: subprocess.Popen
def open(self):
# type: () -> None
# start a caucased server; it creates its server certificate on startup.
software_release_root_path = os.path.join(
self._cls.slap._software_root,
hashlib.md5(self._cls.getSoftwareURL().encode()).hexdigest(),
)
caucased_path = os.path.join(software_release_root_path, 'bin', 'caucased')
self.directory = tempfile.mkdtemp()
caucased_dir = os.path.join(self.directory, 'caucased')
os.mkdir(caucased_dir)
os.mkdir(os.path.join(caucased_dir, 'user'))
os.mkdir(os.path.join(caucased_dir, 'service'))
backend_caucased_netloc = f'{self._cls._ipv4_address}:{findFreeTCPPort(self._cls._ipv4_address)}'
self.url = 'http://' + backend_caucased_netloc
self._caucased_process = subprocess.Popen(
[
caucased_path,
'--db', os.path.join(caucased_dir, 'caucase.sqlite'),
'--server-key', os.path.join(caucased_dir, 'server.key.pem'),
'--netloc', backend_caucased_netloc,
'--service-auto-approve-count', '1',
],
# capture subprocess output so as not to pollute the test's own stdout
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
for _ in range(30):
try:
if requests.get(self.url).status_code == 200:
break
except Exception:
pass
time.sleep(1)
else:
raise RuntimeError('caucased failed to start.')
def close(self):
# type: () -> None
self._caucased_process.terminate()
self._caucased_process.wait()
self._caucased_process.stdout.close()
shutil.rmtree(self.directory)
class BalancerTestCase(ERP5InstanceTestCase):
# We explicitly specify 'balancer' as our software type here,
# so no ZODB is requested. There is thus no need to run these
# tests in both NEO and ZEO mode: it would make no difference.
# https://lab.nexedi.com/nexedi/slapos/blob/273037c8/stack/erp5/instance.cfg.in#L216-230
__test_matrix__ = matrix((default,))
@classmethod
def getInstanceSoftwareType(cls):
return 'balancer'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
return {
'tcpv4-port': 8000,
'computer-memory-percent-threshold': 100,
# XXX what is this ? should probably not be needed here
'name': cls.__name__,
'monitor-passwd': 'secret',
'apachedex-configuration': [
'--logformat', '%h %l %u %t "%r" %>s %O "%{Referer}i" "%{User-Agent}i" %{ms}T',
'--erp5-base', '+erp5', '.*/VirtualHostRoot/erp5(/|\\?|$)',
'--base', '+other', '/',
'--skip-user-agent', 'Zabbix',
'--error-detail',
'--js-embed',
'--quiet',
],
'apachedex-promise-threshold': 100,
'haproxy-server-check-path': '/',
'zope-family-dict': {
'default': ['dummy_http_server'],
},
'dummy_http_server': [[cls.getManagedResource("backend_web_server", EchoHTTPServer).netloc, 1, False]],
'backend-path-dict': {
'default': '',
},
'ssl-authentication-dict': {'default': False},
'ssl': {
'caucase-url': cls.getManagedResource("caucase", CaucaseService).url,
},
'timeout-dict': {'default': None},
'family-path-routing-dict': {},
'path-routing-list': [],
}
@classmethod
def getInstanceParameterDict(cls):
# type: () -> dict
return {'_': json.dumps(cls._getInstanceParameterDict())}
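# Note: wrapping the parameters as a JSON string under the "_" key
# matches the "json-in-xml" serialisation declared in software.cfg.json.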
def setUp(self):
# type: () -> None
self.default_balancer_url = json.loads(
self.computer_partition.getConnectionParameterDict()['_'])['default']
class SlowHTTPServer(ManagedHTTPServer):
"""An HTTP Server which reply after a timeout.
Timeout is 2 seconds by default, and can be specified in the path of the URL
"""
class RequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
# type: () -> None
self.send_response(200)
self.send_header("Content-Type", "text/plain")
timeout = 2
try:
timeout = int(self.path[1:])
except ValueError:
pass
time.sleep(timeout)
self.end_headers()
self.wfile.write(b"OK\n")
log_message = logging.getLogger(__name__ + '.SlowHTTPServer').info
class TestTimeout(BalancerTestCase, CrontabMixin):
__partition_reference__ = 't'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
# use a slow server instead
parameter_dict['dummy_http_server'] = [[cls.getManagedResource("slow_web_server", SlowHTTPServer).netloc, 1, False]]
# and set timeout of 1 second
parameter_dict['timeout-dict'] = {'default': 1}
return parameter_dict
def test_timeout(self):
# type: () -> None
self.assertEqual(
requests.get(
urllib.parse.urljoin(self.default_balancer_url, '/1'),
verify=False).status_code,
requests.codes.ok)
self.assertEqual(
requests.get(
urllib.parse.urljoin(self.default_balancer_url, '/5'),
verify=False).status_code,
requests.codes.gateway_timeout)
class TestLog(BalancerTestCase, CrontabMixin):
"""Check logs emitted by balancer
"""
__partition_reference__ = 'l'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
# use a slow server instead
parameter_dict['dummy_http_server'] = [[cls.getManagedResource("slow_web_server", SlowHTTPServer).netloc, 1, False]]
return parameter_dict
def test_access_log_format(self):
# type: () -> None
requests.get(
urllib.parse.urljoin(self.default_balancer_url, '/url_path'),
verify=False,
)
time.sleep(.5) # wait a bit more until access is logged
with open(os.path.join(self.computer_partition_root_path, 'var', 'log', 'apache-access.log')) as access_log_file:
access_line = access_log_file.read().splitlines()[-1]
self.assertIn('/url_path', access_line)
# last \d is the request time in milliseconds; since this SlowHTTPServer
# sleeps for 2 seconds, it should take between 2 and 3 seconds to process
# the request - but our test machines can be slow sometimes, so we tolerate
# it can take up to 20 seconds.
match = re.match(
r'([(\d\.)]+) - - \[(.*?)\] "(.*?)" (\d+) (\d+) "(.*?)" "(.*?)" (\d+)',
access_line
)
self.assertTrue(match)
assert match
request_time = int(match.groups()[-1])
self.assertGreater(request_time, 2 * 1000)
self.assertLess(request_time, 20 * 1000)
def test_access_log_apachedex_report(self):
# type: () -> None
# make a request so that we have something in the logs
requests.get(self.default_balancer_url, verify=False)
# crontab for apachedex is executed
self._executeCrontabAtDate('generate-apachedex-report', '23:59')
# it creates a report for the day
apachedex_report, = glob.glob(
os.path.join(
self.computer_partition_root_path,
'srv',
'monitor',
'private',
'apachedex',
'ApacheDex-*.html',
))
with open(apachedex_report) as f:
report_text = f.read()
self.assertIn('APacheDEX', report_text)
# having this table means that apachedex could parse some lines.
self.assertIn('<h2>Hits per status code</h2>', report_text)
def test_access_log_rotation(self):
# type: () -> None
# run logrotate a first time so that it creates state files
self._executeCrontabAtDate('logrotate', '2000-01-01')
# make a request so that we have something in the logs
requests.get(self.default_balancer_url, verify=False).raise_for_status()
# slow query crontab depends on crontab for log rotation
# to be executed first.
self._executeCrontabAtDate('logrotate', '2050-01-01')
# this logrotate run leaves the log for the day non-compressed
rotated_log_file = os.path.join(
self.computer_partition_root_path,
'srv',
'backup',
'logrotate',
'apache-access.log-20500101',
)
self.assertTrue(os.path.exists(rotated_log_file))
requests.get(self.default_balancer_url, verify=False).raise_for_status()
# on next day execution of logrotate, log files are compressed
self._executeCrontabAtDate('logrotate', '2050-01-02')
self.assertTrue(os.path.exists(rotated_log_file + '.xz'))
self.assertFalse(os.path.exists(rotated_log_file))
def test_error_log(self):
# type: () -> None
# stop backend server
backend_server = self.getManagedResource("slow_web_server", SlowHTTPServer)
self.addCleanup(backend_server.open)
backend_server.close()
    # after a while, the balancer should detect this and log the event in the error log
time.sleep(5)
self.assertEqual(
requests.get(self.default_balancer_url, verify=False).status_code,
requests.codes.service_unavailable)
with open(os.path.join(self.computer_partition_root_path, 'var', 'log', 'apache-error.log')) as error_log_file:
error_line = error_log_file.read().splitlines()[-1]
self.assertIn('proxy family_default has no server available!', error_line)
    # this log line also includes a timestamp
self.assertRegex(error_line, r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}')
class BalancerCookieHTTPServer(ManagedHTTPServer):
"""An HTTP Server which can set balancer cookie.
This server set cookie when requested /set-cookie path.
The reply body is the name used when registering this resource
using getManagedResource. This way we can assert which
backend replied.
"""
@property
def RequestHandler(self):
server = self
class RequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
# type: () -> None
self.send_response(200)
self.send_header("Content-Type", "text/plain")
if self.path == '/set_cookie':
# the balancer tells the backend what's the name of the balancer cookie with
# the X-Balancer-Current-Cookie header.
self.send_header('Set-Cookie', '%s=anything' % self.headers['X-Balancer-Current-Cookie'])
# The name of this cookie is SERVERID
assert self.headers['X-Balancer-Current-Cookie'] == 'SERVERID'
self.end_headers()
self.wfile.write(server._name.encode('utf-8'))
log_message = logging.getLogger(__name__ + '.BalancerCookieHTTPServer').info
return RequestHandler
class TestBalancer(BalancerTestCase):
"""Check balancing capabilities
"""
__partition_reference__ = 'b'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
# use two backend servers
parameter_dict['dummy_http_server'] = [
[cls.getManagedResource("backend_web_server1", BalancerCookieHTTPServer).netloc, 1, False],
[cls.getManagedResource("backend_web_server2", BalancerCookieHTTPServer).netloc, 1, False],
]
return parameter_dict
def test_balancer_round_robin(self):
# type: () -> None
# requests are by default balanced to both servers
self.assertEqual(
{requests.get(self.default_balancer_url, verify=False).text for _ in range(10)},
{'backend_web_server1', 'backend_web_server2'}
)
def test_balancer_server_down(self):
# type: () -> None
    # if one backend is down, it is excluded from the balancer
self.getManagedResource("backend_web_server2", BalancerCookieHTTPServer).close()
self.addCleanup(self.getManagedResource("backend_web_server2", BalancerCookieHTTPServer).open)
self.assertEqual(
{requests.get(self.default_balancer_url, verify=False).text for _ in range(10)},
{'backend_web_server1',}
)
def test_balancer_set_cookie(self):
# type: () -> None
    # if the backend provides a "SERVERID" cookie, the balancer overwrites it
    # with the backend selected by the balancing algorithm
self.assertIn(
requests.get(urllib.parse.urljoin(self.default_balancer_url, '/set_cookie'), verify=False).cookies['SERVERID'],
('default-0', 'default-1'),
)
def test_balancer_respects_sticky_cookie(self):
# type: () -> None
    # if the request is made with the sticky cookie, the client sticks to one backend
cookies = dict(SERVERID='default-1')
self.assertEqual(
{requests.get(self.default_balancer_url, verify=False, cookies=cookies).text for _ in range(10)},
{'backend_web_server2',}
)
    # if that backend goes down, requests are balanced to another server
self.getManagedResource("backend_web_server2", BalancerCookieHTTPServer).close()
self.addCleanup(self.getManagedResource("backend_web_server2", BalancerCookieHTTPServer).open)
self.assertEqual(
requests.get(self.default_balancer_url, verify=False, cookies=cookies).text,
'backend_web_server1')
def test_balancer_stats_socket(self):
# type: () -> None
    # real-time statistics can be obtained from the stats socket, and there
    # is a wrapper which makes this a bit easier.
socat_process = subprocess.Popen(
[self.computer_partition_root_path + '/bin/haproxy-socat-stats'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
try:
output, _ = socat_process.communicate(b"show stat\n")
    except BaseException:
socat_process.kill()
socat_process.wait()
raise
self.assertEqual(socat_process.poll(), 0)
    # the output is a CSV
self.assertIn(b'family_default,FRONTEND,', output)
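# For readers, a minimal sketch of how the "show stat" CSV output could be
# turned into dictionaries. This helper is illustrative only and is not used
# by the tests; it assumes the usual haproxy convention that the first line
# is a header starting with "# " listing the column names.
def _parse_haproxy_stats_csv(output):
  # type: (bytes) -> list
  import csv
  import io
  lines = output.decode().splitlines()
  # drop the leading "# " marker so that csv.DictReader sees plain field names
  header = lines[0].lstrip('# ')
  reader = csv.DictReader(io.StringIO('\n'.join([header] + lines[1:])))
  # keep only actual statistics rows, e.g. row['pxname'] == 'family_default'
  return [row for row in reader if row.get('pxname')]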
class TestTestRunnerEntryPoints(BalancerTestCase):
"""Check balancer has some entries for test runner.
"""
__partition_reference__ = 't'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
parameter_dict['dummy_http_server-test-runner-address-list'] = [
[
cls.getManagedResource("backend_0", EchoHTTPServer).hostname,
cls.getManagedResource("backend_0", EchoHTTPServer).port,
],
[
cls.getManagedResource("backend_1", EchoHTTPServer).hostname,
cls.getManagedResource("backend_1", EchoHTTPServer).port,
],
[
cls.getManagedResource("backend_2", EchoHTTPServer).hostname,
cls.getManagedResource("backend_2", EchoHTTPServer).port,
],
]
return parameter_dict
def test_use_proper_backend(self):
# type: () -> None
    # requests are directed to the proper backend based on the URL path
test_runner_url_list = self.getRootPartitionConnectionParameterDict(
)['default-test-runner-url-list']
url_0, url_1, url_2 = test_runner_url_list
self.assertEqual(
urllib.parse.urlparse(url_0).netloc,
urllib.parse.urlparse(url_1).netloc)
self.assertEqual(
urllib.parse.urlparse(url_0).netloc,
urllib.parse.urlparse(url_2).netloc)
path_0 = '/VirtualHostBase/https/{netloc}/VirtualHostRoot/_vh_unit_test_0/something'.format(
netloc=urllib.parse.urlparse(url_0).netloc)
path_1 = '/VirtualHostBase/https/{netloc}/VirtualHostRoot/_vh_unit_test_1/something'.format(
netloc=urllib.parse.urlparse(url_0).netloc)
path_2 = '/VirtualHostBase/https/{netloc}/VirtualHostRoot/_vh_unit_test_2/something'.format(
netloc=urllib.parse.urlparse(url_0).netloc)
self.assertEqual(
{
requests.get(url_0 + 'something', verify=False).json()['Path']
for _ in range(10)
}, {path_0})
self.assertEqual(
{
requests.get(url_1 + 'something', verify=False).json()['Path']
for _ in range(10)
}, {path_1})
self.assertEqual(
{
requests.get(url_2 + 'something', verify=False).json()['Path']
for _ in range(10)
}, {path_2})
# If a test runner backend is down, others can be accessed.
self.getManagedResource("backend_0", EchoHTTPServer).close()
self.assertEqual(
{
requests.get(url_0 + 'something', verify=False).status_code
for _ in range(5)
}, {503})
self.assertEqual(
{
requests.get(url_1 + 'something', verify=False).json()['Path']
for _ in range(10)
}, {path_1})
class TestHTTP(BalancerTestCase):
"""Check HTTP protocol with a HTTP/1.1 backend
"""
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
    # use an HTTP/1.1 server instead
parameter_dict['dummy_http_server'] = [[cls.getManagedResource("HTTP/1.1 Server", EchoHTTP11Server).netloc, 1, False]]
return parameter_dict
__partition_reference__ = 'h'
def test_http_version(self):
# type: () -> None
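    # curl's %{http_version} write-out variable prints the negotiated HTTP
    # version: the balancer serves HTTP/2 to clients even though the backend
    # configured above only speaks HTTP/1.1.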
self.assertEqual(
subprocess.check_output([
'curl',
'--silent',
'--show-error',
'--output',
'/dev/null',
'--insecure',
'--write-out',
'%{http_version}',
self.default_balancer_url,
]),
b'2',
)
def test_keep_alive(self):
# type: () -> None
    # when doing two requests, the connection is established only once
with requests.Session() as session:
session.verify = False
# do a first request, which establish a first connection
session.get(self.default_balancer_url).raise_for_status()
# "break" new connection method and check we can make another request
with mock.patch(
"requests.packages.urllib3.connectionpool.HTTPSConnectionPool._new_conn",
) as new_conn:
session.get(self.default_balancer_url).raise_for_status()
new_conn.assert_not_called()
parsed_url = urllib.parse.urlparse(self.default_balancer_url)
    # check that we still have an established connection to the balancer
self.assertTrue([
c for c in psutil.Process(os.getpid()).connections()
if c.status == 'ESTABLISHED' and c.raddr.ip == parsed_url.hostname
and c.raddr.port == parsed_url.port
])
class ContentTypeHTTPServer(ManagedHTTPServer):
"""An HTTP/1.1 Server which reply with content type from path.
For example when requested http://host/text/plain it will reply
with Content-Type: text/plain header.
The body is always "OK"
"""
class RequestHandler(BaseHTTPRequestHandler):
protocol_version = 'HTTP/1.1'
def do_GET(self):
# type: () -> None
self.send_response(200)
if self.path == '/':
self.send_header("Content-Length", '0')
return self.end_headers()
content_type = self.path[1:]
body = b"OK"
self.send_header("Content-Type", content_type)
self.send_header("Content-Length", str(len(body)))
self.end_headers()
self.wfile.write(body)
log_message = logging.getLogger(__name__ + '.ContentTypeHTTPServer').info
class TestContentEncoding(BalancerTestCase):
"""Test how responses are gzip encoded or not depending on content type header.
"""
__partition_reference__ = 'ce'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
parameter_dict['dummy_http_server'] = [
[cls.getManagedResource("content_type_server", ContentTypeHTTPServer).netloc, 1, False],
]
return parameter_dict
def test_gzip_encoding(self):
# type: () -> None
for content_type in (
'text/cache-manifest',
'text/html',
'text/plain',
'text/css',
'application/hal+json',
'application/json',
'application/x-javascript',
'text/xml',
'application/xml',
'application/rss+xml',
'text/javascript',
'application/javascript',
'image/svg+xml',
'application/x-font-ttf',
'application/font-woff',
'application/font-woff2',
'application/x-font-opentype',
'application/wasm',):
resp = requests.get(urllib.parse.urljoin(self.default_balancer_url, content_type), verify=False)
self.assertEqual(resp.headers['Content-Type'], content_type)
self.assertEqual(
resp.headers.get('Content-Encoding'),
'gzip',
'{} uses wrong encoding: {}'.format(content_type, resp.headers.get('Content-Encoding')))
self.assertEqual(resp.text, 'OK')
def test_no_gzip_encoding(self):
# type: () -> None
resp = requests.get(urllib.parse.urljoin(self.default_balancer_url, '/image/png'), verify=False)
self.assertNotIn('Content-Encoding', resp.headers)
self.assertEqual(resp.text, 'OK')
class CaucaseCertificate(ManagedResource):
"""A certificate signed by a caucase service.
"""
ca_crt_file = None # type: str
crl_file = None # type: str
csr_file = None # type: str
cert_file = None # type: str
key_file = None # type: str
def open(self):
# type: () -> None
self.tmpdir = tempfile.mkdtemp()
self.ca_crt_file = os.path.join(self.tmpdir, 'ca-crt.pem')
self.crl_file = os.path.join(self.tmpdir, 'ca-crl.pem')
self.csr_file = os.path.join(self.tmpdir, 'csr.pem')
self.cert_file = os.path.join(self.tmpdir, 'crt.pem')
self.key_file = os.path.join(self.tmpdir, 'key.pem')
def close(self):
# type: () -> None
shutil.rmtree(self.tmpdir)
@property
def _caucase_path(self):
# type: () -> str
"""path of caucase executable.
"""
software_release_root_path = os.path.join(
self._cls.slap._software_root,
hashlib.md5(self._cls.getSoftwareURL().encode()).hexdigest(),
)
return os.path.join(software_release_root_path, 'bin', 'caucase')
def request(self, common_name, caucase):
# type: (str, CaucaseService) -> None
"""Generate certificate and request signature to the caucase service.
This overwrite any previously requested certificate for this instance.
"""
cas_args = [
self._caucase_path,
'--ca-url', caucase.url,
'--ca-crt', self.ca_crt_file,
'--crl', self.crl_file,
]
key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
backend=default_backend()
)
with open(self.key_file, 'wb') as f:
f.write(
key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption(),
))
csr = x509.CertificateSigningRequestBuilder().subject_name(
x509.Name([
x509.NameAttribute(
NameOID.COMMON_NAME,
common_name,
),
])).sign(
key,
hashes.SHA256(),
default_backend(),
)
with open(self.csr_file, 'wb') as f:
f.write(csr.public_bytes(serialization.Encoding.PEM))
csr_id = subprocess.check_output(
cas_args + [
'--send-csr', self.csr_file,
],
).split()[0].decode()
assert csr_id
    for _ in range(30):
      if subprocess.call(
          cas_args + [
              '--get-crt', csr_id, self.cert_file,
          ],
          stdout=subprocess.PIPE,
          stderr=subprocess.STDOUT,
      ) == 0:
        # the certificate was signed and retrieved
        break
      time.sleep(1)
    else:
      raise RuntimeError('getting service certificate failed.')
with open(self.cert_file) as cert_file:
assert 'BEGIN CERTIFICATE' in cert_file.read()
def revoke(self, caucase):
# type: (CaucaseService) -> None
"""Revoke the client certificate on this caucase instance.
"""
subprocess.check_call([
self._caucase_path,
'--ca-url', caucase.url,
'--ca-crt', self.ca_crt_file,
'--crl', self.crl_file,
'--revoke-crt', self.cert_file, self.key_file,
])
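# Typical usage of CaucaseCertificate, as done by the test cases below:
# request a certificate signed by a CaucaseService, use its cert_file and
# key_file as a TLS client certificate, and possibly revoke it later:
#
#   caucase = cls.getManagedResource('frontend_caucase', CaucaseService)
#   certificate = cls.getManagedResource('client_certificate', CaucaseCertificate)
#   certificate.request('common name', caucase)
#   ...
#   certificate.revoke(caucase)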
class TestFrontendXForwardedFor(BalancerTestCase):
__partition_reference__ = 'xff'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
frontend_caucase = cls.getManagedResource('frontend_caucase', CaucaseService)
certificate = cls.getManagedResource('client_certificate', CaucaseCertificate)
certificate.request('shared frontend', frontend_caucase)
parameter_dict = super()._getInstanceParameterDict()
    # add another "-auth" backend, which will have ssl-authentication enabled
parameter_dict['zope-family-dict']['default-auth'] = ['dummy_http_server']
parameter_dict['backend-path-dict']['default-auth'] = '/'
parameter_dict['ssl-authentication-dict'] = {
'default': False,
'default-auth': True,
}
parameter_dict['timeout-dict']['default-auth'] = None
parameter_dict['ssl']['frontend-caucase-url-list'] = [frontend_caucase.url]
return parameter_dict
def test_x_forwarded_for_added_when_verified_connection(self):
# type: () -> None
client_certificate = self.getManagedResource('client_certificate', CaucaseCertificate)
for backend in ('default', 'default-auth'):
balancer_url = json.loads(self.computer_partition.getConnectionParameterDict()['_'])[backend]
result = requests.get(
balancer_url,
headers={'X-Forwarded-For': '1.2.3.4'},
cert=(client_certificate.cert_file, client_certificate.key_file),
verify=False,
).json()
self.assertEqual(result['Incoming Headers'].get('x-forwarded-for', '').split(', ')[0], '1.2.3.4')
def test_x_forwarded_for_stripped_when_not_verified_connection(self):
# type: () -> None
balancer_url = json.loads(self.computer_partition.getConnectionParameterDict()['_'])['default']
result = requests.get(
balancer_url,
headers={'X-Forwarded-For': '1.2.3.4'},
verify=False,
).json()
self.assertNotEqual(result['Incoming Headers'].get('x-forwarded-for', '').split(', ')[0], '1.2.3.4')
balancer_url = json.loads(self.computer_partition.getConnectionParameterDict()['_'])['default-auth']
with self.assertRaisesRegex(Exception, "certificate required"):
requests.get(
balancer_url,
headers={'X-Forwarded-For': '1.2.3.4'},
verify=False,
)
class TestServerTLSProvidedCertificate(BalancerTestCase):
"""Check that certificate and key can be provided as instance parameters.
"""
__partition_reference__ = 's'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
server_caucase = cls.getManagedResource('server_caucase', CaucaseService)
server_certificate = cls.getManagedResource('server_certificate', CaucaseCertificate)
server_certificate.request(cls._ipv4_address, server_caucase)
parameter_dict = super()._getInstanceParameterDict()
with open(server_certificate.cert_file) as f:
parameter_dict['ssl']['cert'] = f.read()
with open(server_certificate.key_file) as f:
parameter_dict['ssl']['key'] = f.read()
return parameter_dict
def test_certificate_validates_with_provided_ca(self):
# type: () -> None
server_certificate = self.getManagedResource("server_certificate", CaucaseCertificate)
requests.get(self.default_balancer_url, verify=server_certificate.ca_crt_file)
class TestClientTLS(BalancerTestCase):
__partition_reference__ = 'c'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
frontend_caucase1 = cls.getManagedResource('frontend_caucase1', CaucaseService)
certificate1 = cls.getManagedResource('client_certificate1', CaucaseCertificate)
certificate1.request('client_certificate1', frontend_caucase1)
frontend_caucase2 = cls.getManagedResource('frontend_caucase2', CaucaseService)
certificate2 = cls.getManagedResource('client_certificate2', CaucaseCertificate)
certificate2.request('client_certificate2', frontend_caucase2)
parameter_dict = super()._getInstanceParameterDict()
parameter_dict['ssl-authentication-dict'] = {
'default': True,
}
parameter_dict['ssl']['frontend-caucase-url-list'] = [
frontend_caucase1.url,
frontend_caucase2.url,
]
return parameter_dict
def test_refresh_crl(self):
# type: () -> None
logger = self.logger
class DebugLogFile:
def write(self, msg):
logger.info("output from caucase_updater: %s", msg)
def flush(self):
pass
for client_certificate_name, caucase_name in (
('client_certificate1', 'frontend_caucase1'),
('client_certificate2', 'frontend_caucase2'),
):
client_certificate = self.getManagedResource(client_certificate_name,
CaucaseCertificate)
      # when the client certificate can be authenticated, the backend receives
      # the CN of the client certificate in the "remote-user" header
def _make_request():
# type: () -> dict
return requests.get(
self.default_balancer_url,
cert=(client_certificate.cert_file, client_certificate.key_file),
verify=False,
).json()
self.assertEqual(_make_request()['Incoming Headers'].get('remote-user'),
client_certificate_name)
      # when the certificate is revoked, the updater service should refresh the
      # CRL used by the balancer from the caucase service used for client
      # certificates (ie. the one used by the frontend).
caucase = self.getManagedResource(caucase_name, CaucaseService)
client_certificate.revoke(caucase)
# until the CRL is updated, the client certificate is still accepted.
self.assertEqual(_make_request()['Incoming Headers'].get('remote-user'),
client_certificate_name)
      # We have two services in charge of updating the CRL and CA certificates,
      # one for each frontend CA
caucase_updater_list = glob.glob(
os.path.join(
self.computer_partition_root_path,
'etc',
'service',
'caucase-updater-*',
))
self.assertEqual(len(caucase_updater_list), 2)
# find the one corresponding to this caucase
for caucase_updater_candidate in caucase_updater_list:
with open(caucase_updater_candidate) as f:
if caucase.url in f.read():
caucase_updater = caucase_updater_candidate
break
else:
self.fail("Could not find caucase updater script for %s" % caucase.url)
      # simulate running the updater service in the future, to confirm that it
      # fetches the new CRL and that the balancer uses it.
process = pexpect.spawnu("faketime +1day %s" % caucase_updater)
process.logfile = DebugLogFile()
process.expect("Got new CRL.*Next wake-up at.*")
process.terminate()
process.wait()
with self.assertRaisesRegex(Exception, 'certificate revoked'):
_make_request()
class TestPathBasedRouting(BalancerTestCase):
"""Check path-based routing rewrites URLs as expected.
"""
__partition_reference__ = 'pbr'
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
parameter_dict = super()._getInstanceParameterDict()
parameter_dict['zope-family-dict'][
'second'
] = parameter_dict['zope-family-dict'][
'default'
]
parameter_dict['timeout-dict']['second'] = None
parameter_dict['ssl-authentication-dict']['second'] = False
    # Outermost slashes in routing rules mean nothing. They are internally
    # stripped and rebuilt in order to correctly represent the request's URL.
parameter_dict['family-path-routing-dict'] = {
'default': [
['foo/bar', 'erp5/boo/far/faz'], # no outermost slashes
['/foo', '/erp5/somewhere'],
['/foo/shadowed', '/foo_shadowed'], # unreachable
['/next', '/erp5/web_site_module/another_next_website'],
],
}
parameter_dict['path-routing-list'] = [
['/next', '/erp5/web_site_module/the_next_website'],
['/next2', '/erp5/web_site_module/the_next2_website'],
['//', '//erp5/web_site_module/123//'], # extraneous slashes
]
return parameter_dict
def test_routing(self):
# type: () -> None
published_dict = json.loads(self.computer_partition.getConnectionParameterDict()['_'])
scheme = 'scheme'
netloc = 'example.com:8080'
prefix = '/VirtualHostBase/' + scheme + '//' + urllib.parse.quote(
netloc,
safe='',
)
# For easier reading of test data, visually separating the virtual host
# base from the virtual host root
vhr = '/VirtualHostRoot'
def assertRoutingEqual(family, path, expected_path):
# type: (str, str, str) -> None
# sanity check: unlike the rules, this test is sensitive to outermost
# slashes, and paths must be absolute-ish for code simplicity.
assert path.startswith('/')
# Frontend is expected to provide URLs with the following path structure:
# /VirtualHostBase/<scheme>//<netloc>/VirtualHostRoot<path>
# where:
# - scheme is the user-input scheme
# - netloc is the user-input netloc
# - path is the user-input path
# Someday, frontends will instead propagate scheme and netloc via other
# means (likely: HTTP headers), in which case this test and the SR will
# need to be amended to reconstruct Virtual Host urls itself, and this
# test will need to be updated accordingly.
self.assertEqual(
requests.get(
urllib.parse.urljoin(published_dict[family], prefix + vhr + path),
verify=False,
).json()['Path'],
expected_path,
)
# Trailing slash presence is preserved.
assertRoutingEqual('default', '/foo/bar', prefix + '/erp5/boo/far/faz' + vhr + '/_vh_foo/_vh_bar')
assertRoutingEqual('default', '/foo/bar/', prefix + '/erp5/boo/far/faz' + vhr + '/_vh_foo/_vh_bar/')
# Subpaths are preserved.
assertRoutingEqual('default', '/foo/bar/hey', prefix + '/erp5/boo/far/faz' + vhr + '/_vh_foo/_vh_bar/hey')
# Rule precedence: later less-specific rules are applied.
assertRoutingEqual('default', '/foo', prefix + '/erp5/somewhere' + vhr + '/_vh_foo')
assertRoutingEqual('default', '/foo/', prefix + '/erp5/somewhere' + vhr + '/_vh_foo/')
assertRoutingEqual('default', '/foo/baz', prefix + '/erp5/somewhere' + vhr + '/_vh_foo/baz')
# Rule precedence: later more-specific rules are meaningless.
assertRoutingEqual('default', '/foo/shadowed', prefix + '/erp5/somewhere' + vhr + '/_vh_foo/shadowed')
    # Rule precedence: family rules are applied before general rules.
assertRoutingEqual('default', '/next', prefix + '/erp5/web_site_module/another_next_website' + vhr + '/_vh_next')
    # Fall back on general rules when no family-specific rule matches.
# Note: the root is special in that there is always a trailing slash in the
# produced URL.
assertRoutingEqual('default', '/', prefix + '/erp5/web_site_module/123' + vhr + '/')
    # Rule-less families reach general rules.
    assertRoutingEqual('second', '/foo/bar', prefix + '/erp5/web_site_module/123' + vhr + '/foo/bar')
    # Rules match whole path elements, so the rule order does not matter for
    # elements which share a common prefix.
assertRoutingEqual('second', '/next', prefix + '/erp5/web_site_module/the_next_website' + vhr + '/_vh_next')
assertRoutingEqual('second', '/next2', prefix + '/erp5/web_site_module/the_next2_website' + vhr + '/_vh_next2')
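# For readers, a minimal sketch of the URL structure exercised above: given a
# user-facing scheme, netloc and path, the frontend is expected to produce
# /VirtualHostBase/<scheme>//<quoted netloc>/VirtualHostRoot<path>. This
# helper is illustrative only and is not used by the tests.
def _make_virtual_host_path(scheme, netloc, path):
  # type: (str, str, str) -> str
  assert path.startswith('/')
  return '/VirtualHostBase/%s//%s/VirtualHostRoot%s' % (
      scheme, urllib.parse.quote(netloc, safe=''), path)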
##############################################################################
#
# Copyright (c) 2022 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import contextlib
import datetime
import glob
import http.client
import json
import os
import shutil
import socket
import ssl
import subprocess
import sys
import tempfile
import time
import unittest
import psutil
import requests
import urllib.parse
import xmlrpc.client
import urllib3
from slapos.testing.utils import CrontabMixin
from . import ERP5InstanceTestCase, setUpModule, matrix, default
setUpModule # pyflakes
class TestPublishedURLIsReachableMixin:
"""Mixin that checks that default page of ERP5 is reachable.
"""
def _checkERP5IsReachable(self, base_url, site_id, verify):
    # We access ERP5 through a "virtual host", which should make
    # ERP5 produce URLs using https://virtual-host-name:1234/virtual_host_root
    # as base.
virtual_host_url = urllib.parse.urljoin(
base_url,
'/VirtualHostBase/https/virtual-host-name:1234/{}/VirtualHostRoot/_vh_virtual_host_root/'
.format(site_id))
    # What happens is that instantiation just creates the services, but does
    # not wait for ERP5 to be initialized. When this test runs, the ERP5
    # instance is instantiated, but zope may still be busy creating the site:
    # haproxy replies with 503 Service Unavailable when zope is not started
    # yet, with 404 when the erp5 site is not created, and with 500 when mysql
    # is not yet reachable, so we configure this requests session to retry.
    # XXX we should probably add a promise instead
with requests.Session() as session:
session.mount(
base_url,
requests.adapters.HTTPAdapter(
max_retries=urllib3.util.retry.Retry(
total=20,
backoff_factor=.5,
status_forcelist=(404, 500, 503))))
r = session.get(virtual_host_url, verify=verify, allow_redirects=False)
self.assertEqual(r.status_code, requests.codes.found)
    # requests to / are redirected to the login form, with the virtual host preserved
self.assertEqual(r.headers.get('location'), 'https://virtual-host-name:1234/virtual_host_root/login_form')
    # the login page can be rendered and contains the text "ERP5"
r = session.get(
urllib.parse.urljoin(base_url, f'{site_id}/login_form'),
verify=verify,
allow_redirects=False,
)
self.assertEqual(r.status_code, requests.codes.ok)
self.assertIn("ERP5", r.text)
def test_published_family_default_v6_is_reachable(self):
"""Tests the IPv6 URL published by the root partition is reachable.
"""
param_dict = self.getRootPartitionConnectionParameterDict()
self._checkERP5IsReachable(
param_dict['family-default-v6'],
param_dict['site-id'],
verify=False,
)
def test_published_family_default_v4_is_reachable(self):
"""Tests the IPv4 URL published by the root partition is reachable.
"""
param_dict = self.getRootPartitionConnectionParameterDict()
self._checkERP5IsReachable(
param_dict['family-default'],
param_dict['site-id'],
verify=False,
)
class TestDefaultParameters(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test ERP5 can be instantiated with no parameters
"""
__partition_reference__ = 'defp'
__test_matrix__ = matrix((default,))
class TestMedusa(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test ERP5 Medusa server
"""
__partition_reference__ = 'medusa'
@classmethod
def getInstanceParameterDict(cls):
return {'_': json.dumps({'wsgi': False})}
class TestJupyter(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test ERP5 Jupyter notebook
"""
__partition_reference__ = 'jupyter'
@classmethod
def getInstanceParameterDict(cls):
return {'_': json.dumps({'jupyter': {'enable': True}})}
def test_jupyter_notebook_is_reachable(self):
param_dict = self.getRootPartitionConnectionParameterDict()
self.assertEqual(
'https://[%s]:8888/tree' % self._ipv6_address,
param_dict['jupyter-url']
)
result = requests.get(
param_dict['jupyter-url'], verify=False, allow_redirects=False)
self.assertEqual(
[requests.codes.found, True, '/login?next=%2Ftree'],
[result.status_code, result.is_redirect, result.headers['Location']]
)
class TestBalancerPorts(ERP5InstanceTestCase):
"""Instantiate with two zope families, this should create for each family:
- a balancer entry point with corresponding haproxy
- a balancer entry point for test runner
"""
__partition_reference__ = 'ap'
@classmethod
def getInstanceParameterDict(cls):
return {
'_':
json.dumps({
"zope-partition-dict": {
"family1": {
"instance-count": 3,
"family": "family1"
},
"family2": {
"instance-count": 5,
"family": "family2"
},
},
})
}
def checkValidHTTPSURL(self, url):
parsed = urllib.parse.urlparse(url)
self.assertEqual(parsed.scheme, 'https')
self.assertTrue(parsed.hostname)
self.assertTrue(parsed.port)
def test_published_family_parameters(self):
# when we request two families, we have two published family-{family_name} URLs
param_dict = self.getRootPartitionConnectionParameterDict()
for family_name in ('family1', 'family2'):
self.checkValidHTTPSURL(
param_dict[f'family-{family_name}'])
self.checkValidHTTPSURL(
param_dict[f'family-{family_name}-v6'])
def test_published_test_runner_url(self):
    # each family also publishes a list of test runner URLs, by default 3 per family
param_dict = self.getRootPartitionConnectionParameterDict()
for family_name in ('family1', 'family2'):
family_test_runner_url_list = param_dict[
f'{family_name}-test-runner-url-list']
self.assertEqual(3, len(family_test_runner_url_list))
for url in family_test_runner_url_list:
self.checkValidHTTPSURL(url)
def test_zope_listen(self):
    # we requested 3 zopes in family1 and 5 zopes in family2, so we should have 8 zopes running.
with self.slap.instance_supervisor_rpc as supervisor:
all_process_info = supervisor.getAllProcessInfo()
self.assertEqual(
3 + 5,
len([p for p in all_process_info if p['name'].startswith('zope-')]))
def test_haproxy_listen(self):
    # We have 2 families; haproxy should listen on a total of 3 ports per
    # family: normal access on IPv4 and IPv6, and test runner access on IPv4
    # only.
with self.slap.instance_supervisor_rpc as supervisor:
all_process_info = supervisor.getAllProcessInfo()
process_info, = (p for p in all_process_info if p['name'].startswith('haproxy-'))
haproxy_master_process = psutil.Process(process_info['pid'])
haproxy_worker_process, = haproxy_master_process.children()
self.assertEqual(
sorted([socket.AF_INET] * 4 + [socket.AF_INET6] * 2),
sorted(
c.family
for c in haproxy_worker_process.connections()
if c.status == 'LISTEN'
))
class TestSeleniumTestRunner(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test ERP5 can be instantiated with selenium server for test runner.
"""
__partition_reference__ = 'sel'
@classmethod
def getInstanceParameterDict(cls):
return {
'_':
json.dumps({
'test-runner': {
'selenium': {
"target": "selenium-server",
"server-url": "https://example.com",
"verify-server-certificate": False,
"desired-capabilities": {
"browserName": "firefox",
"version": "68.0.2esr",
}
}
}
})
}
def test_test_runner_configuration_json_file(self):
runUnitTest_script, = glob.glob(
self.computer_partition_root_path + "/../*/bin/runUnitTest.real")
config_file = None
with open(runUnitTest_script) as f:
for line in f:
if 'ERP5_TEST_RUNNER_CONFIGURATION' in line:
_, config_file = line.split('=')
assert config_file
with open(config_file.strip()) as f:
self.assertEqual(
f.read(),
json.dumps(json.loads(self.getInstanceParameterDict()['_'])['test-runner'], sort_keys=True))
class TestDisableTestRunner(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test ERP5 can be instantiated without test runner.
"""
__partition_reference__ = 'distr'
@classmethod
def getInstanceParameterDict(cls):
return {'_': json.dumps({'test-runner': {'enabled': False}})}
def test_no_runUnitTestScript(self):
"""No runUnitTest script should be generated in any partition.
"""
    # self.computer_partition_root_path is the path of the root partition;
    # we want to assert that no scripts exist in any partition.
bin_programs = list(map(os.path.basename,
glob.glob(self.computer_partition_root_path + "/../*/bin/*")))
self.assertTrue(bin_programs) # just to check the glob was correct.
self.assertNotIn('runUnitTest', bin_programs)
self.assertNotIn('runTestSuite', bin_programs)
def test_no_haproxy_testrunner_port(self):
    # Haproxy only listens on two ports; no haproxy ports are allocated for the test runner
with self.slap.instance_supervisor_rpc as supervisor:
all_process_info = supervisor.getAllProcessInfo()
process_info, = (p for p in all_process_info if p['name'].startswith('haproxy'))
haproxy_master_process = psutil.Process(process_info['pid'])
haproxy_worker_process, = haproxy_master_process.children()
self.assertEqual(
sorted([socket.AF_INET, socket.AF_INET6]),
sorted(
c.family
for c in haproxy_worker_process.connections()
if c.status == 'LISTEN'
))
class TestZopeNodeParameterOverride(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test override zope node parameters
"""
__partition_reference__ = 'override'
__test_matrix__ = matrix((default,))
@classmethod
def getInstanceParameterDict(cls):
# The following example includes the most commonly used options,
# but not necessarily in a meaningful way.
return {'_': json.dumps({
"zodb": [{
"type": "zeo",
"server": {},
"cache-size-bytes": "20MB",
"cache-size-bytes!": [
("bb-0", 1<<20),
("bb-.*", "500MB"),
],
"pool-timeout": "10m",
"storage-dict": {
"cache-size!": [
("a-.*", "50MB"),
],
},
}],
"zope-partition-dict": {
"a": {
"instance-count": 3,
},
"bb": {
"instance-count": 5,
"port-base": 2300,
},
},
})}
def test_zope_conf(self):
zeo_addr = json.loads(
self.getComputerPartition('zodb').getConnectionParameter('_')
)["storage-dict"]["root"]["server"]
def checkParameter(line, kw):
k, v = line.split()
self.assertFalse(k.endswith('!'), k)
      try:
        expected = kw.pop(k)
      except KeyError:
        if k == 'server':
          return
        raise
self.assertIsNotNone(expected)
self.assertEqual(str(expected), v)
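    # checkConf below walks a zope-<n>.conf section which, under the
    # parameters requested above, is assumed to look roughly like:
    #   <zodb_db root>
    #     cache-size-bytes 20MB
    #     mount-point /
    #     pool-size 4
    #     pool-timeout 10m
    #     <zeoclient>
    #       storage root
    #       server <zeo_addr>
    #       cache-size 50MB
    #     </zeoclient>
    #   </zodb_db>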
def checkConf(zodb, storage):
zodb["mount-point"] = "/"
zodb["pool-size"] = 4
zodb["pool-timeout"] = "10m"
storage["storage"] = "root"
storage["server"] = zeo_addr
with open(f'{partition}/etc/zope-{zope}.conf') as f:
conf = list(map(str.strip, f.readlines()))
i = conf.index("<zodb_db root>") + 1
conf = iter(conf[i:conf.index("</zodb_db>", i)])
for line in conf:
if line == '<zeoclient>':
for line in conf:
if line == '</zeoclient>':
break
checkParameter(line, storage)
for k, v in storage.items():
self.assertIsNone(v, k)
del storage
else:
checkParameter(line, zodb)
for k, v in zodb.items():
self.assertIsNone(v, k)
partition = self.getComputerPartitionPath('zope-a')
for zope in range(3):
checkConf({
"cache-size-bytes": "20MB",
}, {
"cache-size": "50MB",
})
partition = self.getComputerPartitionPath('zope-bb')
for zope in range(5):
checkConf({
"cache-size-bytes": "500MB" if zope else 1<<20,
}, {
"cache-size": None,
})
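# For readers, a minimal sketch of the "parameter!" override convention
# exercised above. It assumes, as the expected values in test_zope_conf
# suggest, that the first pattern matching the node name wins and that the
# plain parameter acts as the default; the anchored regexp match is also an
# assumption. This helper is illustrative only and is not used by the tests.
def _resolve_node_override(default, override_list, node_name):
  # type: (object, list, str) -> object
  import re  # "re" is not imported at the top of this file
  for pattern, value in override_list:
    if re.match('%s$' % pattern, node_name):
      return value
  return default
# e.g. _resolve_node_override(
#     "20MB", [("bb-0", 1 << 20), ("bb-.*", "500MB")], "bb-3") -> "500MB"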
class TestWatchActivities(ERP5InstanceTestCase):
"""Tests for bin/watch_activities scripts in zope partitions.
"""
__partition_reference__ = 'wa'
def test(self):
# "watch_activities" scripts use watch command. We'll fake a watch command
# that executes the actual command only once to check the output.
tmpdir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, tmpdir)
with open(os.path.join(tmpdir, 'watch'), 'w') as f:
f.write("""#!/bin/sh
if [ "$1" != "-n" ] || [ "$2" != "5" ]
then
echo unexpected arguments: "$1" "$2"
exit 1
fi
shift
shift
exec bash -c "$@"
""")
os.fchmod(f.fileno(), 0o700)
try:
output = subprocess.check_output(
[
os.path.join(
self.getComputerPartitionPath('zope-1'),
'bin',
'watch_activities',
)
],
env=dict(os.environ,
PATH=os.pathsep.join([tmpdir, os.environ['PATH']])),
stderr=subprocess.STDOUT,
text=True,
)
except subprocess.CalledProcessError as e:
self.fail(e.output)
self.assertIn(' dict ', output)
class ZopeSkinsMixin:
"""Mixins with utility methods to test zope behaviors.
"""
@classmethod
def _setUpClass(cls):
super()._setUpClass()
cls._waitForActivities()
@classmethod
def _waitForActivities(cls, timeout=datetime.timedelta(minutes=10).total_seconds()):
"""Wait for ERP5 to be ready and have processed all activities.
"""
for _ in range(int(timeout / 5)):
with cls.getXMLRPCClient() as erp5_xmlrpc_client:
try:
if erp5_xmlrpc_client.portal_activities.countMessage() == 0:
break
except (xmlrpc.client.ProtocolError,
xmlrpc.client.Fault,
http.client.HTTPException):
pass
time.sleep(5)
else:
raise AssertionError("Timeout waiting for activities")
@classmethod
def _getAuthenticatedZopeUrl(cls, path, family_name='default'):
"""Returns a URL to access a zope family through balancer,
with credentials in the URL.
path is joined with urllib.parse.urljoin to the URL of the portal.
"""
param_dict = cls.getRootPartitionConnectionParameterDict()
parsed = urllib.parse.urlparse(param_dict['family-' + family_name])
base_url = parsed._replace(
netloc='{}:{}@{}:{}'.format(
param_dict['inituser-login'],
param_dict['inituser-password'],
parsed.hostname,
parsed.port,
),
path=param_dict['site-id'] + '/',
).geturl()
return urllib.parse.urljoin(base_url, path)
@classmethod
@contextlib.contextmanager
def getXMLRPCClient(cls):
# don't verify certificate
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
erp5_xmlrpc_client = xmlrpc.client.ServerProxy(
cls._getAuthenticatedZopeUrl(''),
context=ssl_context,
)
with erp5_xmlrpc_client:
yield erp5_xmlrpc_client
@classmethod
def _addPythonScript(cls, script_id, params, body):
with cls.getXMLRPCClient() as erp5_xmlrpc_client:
custom = erp5_xmlrpc_client.portal_skins.custom
try:
custom.manage_addProduct.PythonScripts.manage_addPythonScript(
script_id)
except xmlrpc.client.ProtocolError as e:
if e.errcode != 302:
raise
getattr(custom, script_id).ZPythonScriptHTML_editAction(
'',
'',
params,
body,
)
class ZopeTestMixin(ZopeSkinsMixin, CrontabMixin):
"""Mixin class for zope features.
"""
wsgi = NotImplemented # type: bool
__partition_reference__ = 'z'
@classmethod
def getInstanceParameterDict(cls):
return {
'_':
json.dumps({
"zope-partition-dict": {
"default": {
"longrequest-logger-interval": 1,
"longrequest-logger-timeout": 1,
},
"multiple": {
"family": "multiple",
"instance-count": 3,
"port-base": 2210,
},
},
"wsgi": cls.wsgi,
})
}
@classmethod
def _setUpClass(cls):
super()._setUpClass()
cls.zope_base_url = cls._getAuthenticatedZopeUrl('')
param_dict = cls.getRootPartitionConnectionParameterDict()
cls.zope_deadlock_debugger_url = cls._getAuthenticatedZopeUrl(
'/manage_debug_threads?{deadlock-debugger-password}'.format(
**param_dict))
# a python script to verify activity processing
cls._addPythonScript(
script_id='ERP5Site_verifyActivityProcessing',
params='mode',
body='''if 1:
import json
portal = context.getPortalObject()
if mode == "count":
return json.dumps(dict(count=len(portal.portal_activities.getMessageList())))
if mode == "activate":
for _ in range(10):
portal.portal_templates.activate(activity="SQLQueue").getTitle()
return "activated"
raise ValueError("Unknown mode: %s" % mode)
''',
)
cls.zope_verify_activity_processing_url = urllib.parse.urljoin(
cls.zope_base_url,
'ERP5Site_verifyActivityProcessing',
)
# a python script logging to event log
cls._addPythonScript(
script_id='ERP5Site_logMessage',
params='name',
body='''if 1:
from erp5.component.module.Log import log
return log("hello %s" % name)
''',
)
cls.zope_log_message_url = urllib.parse.urljoin(
cls.zope_base_url,
'ERP5Site_logMessage',
)
# a python script issuing a long request
cls._addPythonScript(
script_id='ERP5Site_executeLongRequest',
params='',
body='''if 1:
import time
for _ in range(5):
time.sleep(1)
return "done"
''',
)
cls.zope_long_request_url = urllib.parse.urljoin(
cls.zope_base_url,
'ERP5Site_executeLongRequest',
)
def setUp(self):
super().setUp()
    # run logrotate a first time so that it creates state files
self._executeCrontabAtDate('logrotate', '2000-01-01')
def tearDown(self):
super().tearDown()
# reset logrotate status
logrotate_status = os.path.join(
self.getComputerPartitionPath('zope-default'),
'srv',
'logrotate.status',
)
if os.path.exists(logrotate_status):
os.unlink(logrotate_status)
for logfile in glob.glob(
os.path.join(
self.getComputerPartitionPath('zope-default'),
'srv',
'backup',
'logrotate',
'*',
)):
os.unlink(logfile)
for logfile in glob.glob(
os.path.join(
self.getComputerPartitionPath('zope-default'),
'srv',
'monitor',
'private',
'documents',
'*',
)):
os.unlink(logfile)
def _getCrontabCommand(self, crontab_name):
# type: (str) -> str
"""Read a crontab and return the command that is executed.
    overloaded to use the crontab from the zope partition
"""
with open(
os.path.join(
self.getComputerPartitionPath('zope-default'),
'etc',
'cron.d',
crontab_name,
)) as f:
crontab_spec, = f.readlines()
self.assertNotEqual(crontab_spec[0], '@', crontab_spec)
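    # a crontab line has five time fields followed by the command, e.g.
    # "0 15 * * * /path/to/command"; split(None, 5)[-1] keeps only the command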
return crontab_spec.split(None, 5)[-1]
def test_event_log_rotation(self):
requests.get(
self.zope_log_message_url,
params={
"name": "world"
},
verify=False,
).raise_for_status()
zope_event_log_path = os.path.join(
self.getComputerPartitionPath('zope-default'),
'var',
'log',
'zope-0-event.log',
)
with open(zope_event_log_path) as f:
self.assertIn('hello world', f.read())
self._executeCrontabAtDate('logrotate', '2050-01-01')
    # this logrotate leaves the log for the day uncompressed
rotated_log_file = os.path.join(
self.getComputerPartitionPath('zope-default'),
'srv',
'backup',
'logrotate',
'zope-0-event.log-20500101',
)
with open(rotated_log_file) as f:
self.assertIn('hello world', f.read())
requests.get(
self.zope_log_message_url,
params={
"name": "le monde"
},
verify=False,
).raise_for_status()
with open(zope_event_log_path) as f:
self.assertNotIn('hello world', f.read())
with open(zope_event_log_path) as f:
self.assertIn('hello le monde', f.read())
    # on the next day's execution of logrotate, log files are compressed
self._executeCrontabAtDate('logrotate', '2050-01-02')
self.assertTrue(os.path.exists(rotated_log_file + '.xz'))
self.assertFalse(os.path.exists(rotated_log_file))
def test_access_log_rotation(self):
requests.get(
self.zope_base_url,
verify=False,
headers={
'User-Agent': 'before rotation'
},
).raise_for_status()
zope_access_log_path = os.path.join(
self.getComputerPartitionPath('zope-default'),
'var',
'log',
'zope-0-Z2.log',
)
with open(zope_access_log_path) as f:
self.assertIn('before rotation', f.read())
self._executeCrontabAtDate('logrotate', '2050-01-01')
    # this logrotate leaves the log for the day uncompressed
rotated_log_file = os.path.join(
self.getComputerPartitionPath('zope-default'),
'srv',
'backup',
'logrotate',
'zope-0-Z2.log-20500101',
)
with open(rotated_log_file) as f:
self.assertIn('before rotation', f.read())
requests.get(
self.zope_base_url,
verify=False,
headers={
'User-Agent': 'after rotation'
},
).raise_for_status()
with open(zope_access_log_path) as f:
self.assertNotIn('before rotation', f.read())
with open(zope_access_log_path) as f:
self.assertIn('after rotation', f.read())
    # on the next day's execution of logrotate, log files are compressed
self._executeCrontabAtDate('logrotate', '2050-01-02')
self.assertTrue(os.path.exists(rotated_log_file + '.xz'))
self.assertFalse(os.path.exists(rotated_log_file))
def test_long_request_log_rotation(self):
requests.get(self.zope_long_request_url,
verify=False,
params={
'when': 'before rotation'
}).raise_for_status()
zope_long_request_log_path = os.path.join(
self.getComputerPartitionPath('zope-default'),
'var',
'log',
'longrequest_logger_zope-0.log',
)
with open(zope_long_request_log_path) as f:
self.assertIn('before rotation', f.read())
self._executeCrontabAtDate('logrotate', '2050-01-01')
    # this logrotate leaves the log for the day uncompressed
rotated_log_file = os.path.join(
self.getComputerPartitionPath('zope-default'),
'srv',
'backup',
'logrotate',
'longrequest_logger_zope-0.log-20500101',
)
with open(rotated_log_file) as f:
self.assertIn('before rotation', f.read())
requests.get(
self.zope_long_request_url,
verify=False,
params={
'when': 'after rotation'
},
).raise_for_status()
with open(zope_long_request_log_path) as f:
self.assertNotIn('before rotation', f.read())
with open(zope_long_request_log_path) as f:
self.assertIn('after rotation', f.read())
    # on the next day's execution of logrotate, log files are compressed
self._executeCrontabAtDate('logrotate', '2050-01-02')
self.assertTrue(os.path.exists(rotated_log_file + '.xz'))
self.assertFalse(os.path.exists(rotated_log_file))
def test_basic_authentication_user_in_access_log(self):
param_dict = self.getRootPartitionConnectionParameterDict()
requests.get(self.zope_base_url,
verify=False,
auth=requests.auth.HTTPBasicAuth(
param_dict['inituser-login'],
param_dict['inituser-password'],
)).raise_for_status()
zope_access_log_path = os.path.join(
self.getComputerPartitionPath('zope-default'),
'var',
'log',
'zope-0-Z2.log',
)
with open(zope_access_log_path) as f:
self.assertIn(param_dict['inituser-login'], f.read())
def test_deadlock_debugger(self):
dump_response = requests.get(
self.zope_deadlock_debugger_url,
verify=False,
)
dump_response.raise_for_status()
self.assertIn('Thread ', dump_response.text)
def test_activity_processing(self):
def wait_for_activities(max_retries):
for retry in range(max_retries):
time.sleep(10)
resp = requests.get(
self.zope_verify_activity_processing_url,
params={
'mode': 'count',
'retry': retry,
},
verify=False,
)
if not resp.ok:
# XXX we start by flushing existing activities from site creation
# and initial upgrader run. During this time it may happen that
          # ERP5 replies with site errors; we tolerate these errors and only
# check the final state.
continue
count = resp.json()['count']
if not count:
break
else:
self.assertEqual(count, 0)
wait_for_activities(60)
requests.get(
self.zope_verify_activity_processing_url,
params={
'mode': 'activate'
},
verify=False,
).raise_for_status()
wait_for_activities(10)
def test_multiple_zope_family_log_files(self):
logfiles = [
os.path.basename(p) for p in glob.glob(
os.path.join(
self.getComputerPartitionPath('zope-multiple'), 'var', 'log', '*'))
]
self.assertEqual(
sorted([l for l in logfiles if l.startswith('zope')]), [
'zope-0-Z2.log',
'zope-0-event.log',
'zope-0-neo-root.log',
'zope-1-Z2.log',
'zope-1-event.log',
'zope-1-neo-root.log',
'zope-2-Z2.log',
'zope-2-event.log',
'zope-2-neo-root.log',
] if '_neo' in self.__class__.__name__ else [
'zope-0-Z2.log',
'zope-0-event.log',
'zope-1-Z2.log',
'zope-1-event.log',
'zope-2-Z2.log',
'zope-2-event.log',
])
class TestZopeMedusa(ZopeTestMixin, ERP5InstanceTestCase):
wsgi = False
class TestZopeWSGI(ZopeTestMixin, ERP5InstanceTestCase):
wsgi = True
@unittest.expectedFailure
def test_long_request_log_rotation(self):
super().test_long_request_log_rotation()
@unittest.expectedFailure
def test_basic_authentication_user_in_access_log(self):
super().test_basic_authentication_user_in_access_log()
class TestZopePublisherTimeout(ZopeSkinsMixin, ERP5InstanceTestCase):
__partition_reference__ = 't'
@classmethod
def getInstanceParameterDict(cls):
return {
'_':
json.dumps({
# a default timeout of 3
"publisher-timeout": 3,
# and a family without timeout
"family-override": {
"no-timeout": {
"publisher-timeout": None,
},
},
"zope-partition-dict": {
# a family to process activities, so that our test
# does not hit a zope node processing activities
"activity": {
"family": "activity",
},
"default": {
"family": "default",
"port-base": 2210,
},
"no-timeout": {
"family": "no-timeout",
"port-base": 22220,
},
},
})
}
@classmethod
def _setUpClass(cls):
super()._setUpClass()
cls._addPythonScript(
'ERP5Site_doSlowRequest',
'',
'''if 1:
import time
def recurse(o):
time.sleep(0.1)
for sub in o.objectValues():
recurse(sub)
recurse(context.getPortalObject())
'''
)
  def test_long_request_interrupted_on_default_family(self):
ret = requests.get(self._getAuthenticatedZopeUrl(
'ERP5Site_doSlowRequest', family_name='default'), verify=False)
self.assertIn('TimeoutReachedError', ret.text)
self.assertEqual(ret.status_code, requests.codes.server_error)
  def test_long_request_not_interrupted_on_no_timeout_family(self):
with self.assertRaises(requests.exceptions.Timeout):
requests.get(
self._getAuthenticatedZopeUrl('ERP5Site_doSlowRequest', family_name='no-timeout'),
verify=False,
timeout=6)
class TestCloudooo(ZopeSkinsMixin, ERP5InstanceTestCase):
"""Test ERP5 can be instantiated with cloudooo parameters
"""
__partition_reference__ = 'c'
@classmethod
def getInstanceParameterDict(cls):
return {
'_':
json.dumps({
'cloudooo-url-list': [
'https://cloudooo1.example.com/',
'https://cloudooo2.example.com/',
],
'cloudooo-retry-count': 123,
})
}
def test_cloudooo_url_list_preference(self):
self.assertEqual(
requests.get(
self._getAuthenticatedZopeUrl(
'portal_preferences/getPreferredDocumentConversionServerUrlList'),
verify=False).text,
"['https://cloudooo1.example.com/', 'https://cloudooo2.example.com/']")
@unittest.expectedFailure # setting "retry" is not implemented
def test_cloudooo_retry_count_preference(self):
self.assertEqual(
requests.get(
self._getAuthenticatedZopeUrl(
'portal_preferences/getPreferredDocumentConversionServerRetry'),
verify=False).text,
"123")
class TestCloudoooDefaultParameter(ZopeSkinsMixin, ERP5InstanceTestCase):
"""Test default ERP5 cloudooo parameters
"""
__partition_reference__ = 'cd'
def test_cloudooo_url_list_preference(self):
self.assertIn(
requests.get(
self._getAuthenticatedZopeUrl(
'portal_preferences/getPreferredDocumentConversionServerUrlList'),
verify=False).text,
[
"['https://cloudooo1.erp5.net/', 'https://cloudooo.erp5.net/']",
"['https://cloudooo.erp5.net/', 'https://cloudooo1.erp5.net/']",
])
@unittest.expectedFailure # default value of "retry" does not match schema
def test_cloudooo_retry_count_preference(self):
self.assertEqual(
requests.get(
self._getAuthenticatedZopeUrl(
'portal_preferences/getPreferredDocumentConversionServerRetry'),
verify=False).text,
"2")
##############################################################################
#
# Copyright (c) 2018 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly adviced to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import os
import json
import glob
import urllib.parse
import socket
import sys
import time
import contextlib
import datetime
import subprocess
import gzip
import lzma
import MySQLdb
from slapos.testing.utils import CrontabMixin
from slapos.testing.utils import getPromisePluginParameterDict
from . import ERP5InstanceTestCase
from . import setUpModule
from . import matrix
from . import default
setUpModule # pyflakes
class MariaDBTestCase(ERP5InstanceTestCase):
"""Base test case for mariadb tests.
"""
__partition_reference__ = 'm'
  # We explicitly specify 'mariadb' as our software type here, so we don't
  # request ZODB. We therefore don't need to run these tests with both NEO
  # and ZEO modes, as it wouldn't make any difference.
# https://lab.nexedi.com/nexedi/slapos/blob/273037c8/stack/erp5/instance.cfg.in#L216-230
__test_matrix__ = matrix((default,))
@classmethod
def getInstanceSoftwareType(cls):
return "mariadb"
@classmethod
def _getInstanceParameterDict(cls):
# type: () -> dict
return {
'tcpv4-port': 3306,
'max-connection-count': 5,
'long-query-time': 3,
'max-slowqueries-threshold': 1,
'slowest-query-threshold': 0.1,
      # XXX what is this? should probably not be needed here
'name': cls.__name__,
'monitor-passwd': 'secret',
# XXX should probably not be needed here
'computer-memory-percent-threshold': 100,
}
@classmethod
def getInstanceParameterDict(cls):
# type: () -> dict
return {'_': json.dumps(cls._getInstanceParameterDict())}
def getDatabaseConnection(self):
# type: () -> MySQLdb.connections.Connection
connection_parameter_dict = json.loads(
self.computer_partition.getConnectionParameterDict()['_'])
db_url = urllib.parse.urlparse(connection_parameter_dict['database-list'][0])
self.assertEqual('mysql', db_url.scheme)
self.assertTrue(db_url.path.startswith('/'))
database_name = db_url.path[1:]
return MySQLdb.connect(
user=db_url.username,
passwd=db_url.password,
host=db_url.hostname,
port=db_url.port,
db=database_name,
use_unicode=True,
charset='utf8mb4'
)
class TestCrontabs(MariaDBTestCase, CrontabMixin):
_save_instance_file_pattern_list = \
MariaDBTestCase._save_instance_file_pattern_list + (
'*/srv/backup/*',
)
def test_full_backup(self):
# type: () -> None
self._executeCrontabAtDate('mariadb-backup', '2050-01-01')
full_backup_file, = glob.glob(
os.path.join(
self.computer_partition_root_path,
'srv',
'backup',
'mariadb-full',
'205001010000??.sql.gz',
))
with gzip.open(full_backup_file, 'rt') as dump:
self.assertIn('CREATE TABLE', dump.read())
def test_logrotate_and_slow_query_digest(self):
# type: () -> None
    # slow query digest needs to run after logrotate, since it operates on the
    # rotated file, so this tests both logrotate and the slow query digest.
    # run logrotate a first time so that it creates state files
self._executeCrontabAtDate('logrotate', '2000-01-01')
# make two slow queries. We are using long-query-time=3, so the queries
# must take more than 3 seconds to be logged.
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query("SELECT SLEEP(3.1)")
cnx.store_result()
cnx.query("SELECT SLEEP(3.2)")
# slow query crontab depends on crontab for log rotation
# to be executed first.
self._executeCrontabAtDate('logrotate', '2050-01-01')
    # this logrotate leaves the log for the day uncompressed
rotated_log_file = os.path.join(
self.computer_partition_root_path,
'srv',
'backup',
'logrotate',
'mariadb_slowquery.log-20500101',
)
self.assertTrue(os.path.exists(rotated_log_file))
    # then the crontab generating the slow query report is executed
self._executeCrontabAtDate('generate-mariadb-slow-query-report', '2050-01-01')
# and it creates a report for the day
slow_query_report = os.path.join(
self.computer_partition_root_path,
'srv',
'monitor',
'private',
'slowquery_digest',
'slowquery_digest.txt-2050-01-01.xz',
)
with lzma.open(slow_query_report, 'rt') as f:
# this is the hash for our "select sleep(n)" slow query
self.assertIn("ID 0xF9A57DD5A41825CA", f.read())
    # on the next day's execution of logrotate, log files are compressed
self._executeCrontabAtDate('logrotate', '2050-01-02')
self.assertTrue(os.path.exists(rotated_log_file + '.xz'))
self.assertFalse(os.path.exists(rotated_log_file))
# there's a promise checking that the threshold is not exceeded
# and it reports a problem since we set a threshold of 1 slow query
check_slow_query_promise_plugin = getPromisePluginParameterDict(
os.path.join(
self.computer_partition_root_path,
'etc',
'plugin',
'check-slow-query-pt-digest-result.py',
))
with self.assertRaises(subprocess.CalledProcessError) as error_context:
subprocess.check_output(
'faketime 2050-01-01 %s' % check_slow_query_promise_plugin['command'],
text=True,
shell=True)
self.assertEqual(
error_context.exception.output,
"Threshold is lower than expected: \n"
"Expected total queries : 1.0 and current is: 2\n"
"Expected slowest query : 0.1 and current is: 3\n",
)
class TestMariaDB(MariaDBTestCase):
def test_utf8_collation(self):
# type: () -> None
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query(
"""
CREATE TABLE test_utf8_collation (
col1 CHAR(10)
)
""")
cnx.store_result()
cnx.query(
"""
insert into test_utf8_collation values ("à"), ("あ")
""")
cnx.store_result()
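      # with the accent-insensitive collation used here, "à" is expected to
      # match "a" while "あ" is not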
cnx.query(
"""
select * from test_utf8_collation where col1 = "a"
""")
self.assertEqual((('à',),), cnx.store_result().fetch_row(maxrows=2))
class TestMroonga(MariaDBTestCase):
def test_mroonga_plugin_loaded(self):
# type: () -> None
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query("show plugins")
plugins = cnx.store_result().fetch_row(maxrows=1000)
self.assertIn(
('Mroonga', 'ACTIVE', 'STORAGE ENGINE', 'ha_mroonga.so', 'GPL'),
plugins)
def test_mroonga_normalize_udf(self):
# type: () -> None
# example from https://mroonga.org/docs/reference/udf/mroonga_normalize.html#usage
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query(
"""
SELECT mroonga_normalize("ABCDあぃうぇ㍑")
""")
# XXX this is returned as bytes by mroonga/mariadb (this might be a bug)
self.assertEqual((('abcdあぃうぇリットル'.encode(),),),
cnx.store_result().fetch_row(maxrows=2))
if 0:
# this example fails with:
# OperationalError: (1123, "Can't initialize function 'mroonga_normalize'; mroonga_normalize(): nonexistent normalizer NormalizerMySQLUnicodeCIExceptKanaCI")
# same error on mroonga "official" docker images using mysql
# https://hub.docker.com/layers/groonga/mroonga/latest/images/sha256-e5a979801c95544ca3a1228d2c4d819820850e0162649553f2e94850e5e1c988?context=explore
# so it's probably OK to ignore
cnx.query(
"""
SELECT mroonga_normalize("aBcDあぃウェ㍑", "NormalizerMySQLUnicodeCIExceptKanaCIKanaWithVoicedSoundMark")
""")
self.assertEqual((('ABCDあぃうぇ㍑'.encode(),),),
cnx.store_result().fetch_row(maxrows=2))
def test_mroonga_full_text_normalizer(self):
# type: () -> None
# example from https://mroonga.org//docs/tutorial/storage.html#how-to-specify-the-normalizer
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query("SET NAMES utf8")
cnx.store_result()
cnx.query(
"""
CREATE TABLE diaries (
day DATE PRIMARY KEY,
content VARCHAR(64) NOT NULL,
FULLTEXT INDEX (content) COMMENT 'normalizer "NormalizerAuto"'
) Engine=Mroonga DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
""")
cnx.store_result()
cnx.query(
"""INSERT INTO diaries VALUES ("2013-04-23", "ブラックコーヒーを飲んだ。")""")
cnx.store_result()
cnx.query(
"""
SELECT *
FROM diaries
WHERE MATCH (content) AGAINST ("+ふらつく" IN BOOLEAN MODE)
""")
self.assertEqual((), cnx.store_result().fetch_row(maxrows=2))
cnx.query(
"""
SELECT *
FROM diaries
WHERE MATCH (content) AGAINST ("+ブラック" IN BOOLEAN MODE)
""")
self.assertEqual(
((datetime.date(2013, 4, 23), 'ブラックコーヒーを飲んだ。'),),
cnx.store_result().fetch_row(maxrows=2),
)
def test_mroonga_full_text_normalizer_TokenBigramSplitSymbolAlphaDigit(self):
# type: () -> None
# Similar to ERP5's testI18NSearch with erp5_full_text_mroonga_catalog
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query(
"""
CREATE TABLE `full_text` (
`uid` BIGINT UNSIGNED NOT NULL,
`SearchableText` MEDIUMTEXT,
PRIMARY KEY (`uid`),
FULLTEXT `SearchableText` (`SearchableText`) COMMENT 'parser "TokenBigramSplitSymbolAlphaDigit"'
) ENGINE=mroonga
""")
cnx.store_result()
cnx.query(
"""
INSERT INTO full_text VALUES
(1, "Gabriel Fauré Quick brown fox jumps over the lazy dog"),
(2, "武者小路 実篤 Slow white fox jumps over the diligent dog."),
(3, "( - + )")""")
cnx.store_result()
cnx.query(
"""
SELECT uid
FROM full_text
WHERE MATCH (`full_text`.`SearchableText`) AGAINST ('*D+ Faure' IN BOOLEAN MODE)
""")
self.assertEqual(((1,),), cnx.store_result().fetch_row(maxrows=2))
cnx.query(
"""
SELECT uid
FROM full_text
WHERE MATCH (`full_text`.`SearchableText`) AGAINST ('*D+ 武者' IN BOOLEAN MODE)
""")
self.assertEqual(((2,),), cnx.store_result().fetch_row(maxrows=2))
cnx.query(
"""
SELECT uid
FROM full_text
WHERE MATCH (`full_text`.`SearchableText`) AGAINST ('*D+ +quick +fox +dog' IN BOOLEAN MODE)
""")
self.assertEqual(((1,),), cnx.store_result().fetch_row(maxrows=2))
def test_mroonga_full_text_stem(self):
# type: () -> None
# example from https://mroonga.org//docs/tutorial/storage.html#how-to-specify-the-token-filters
cnx = self.getDatabaseConnection()
with contextlib.closing(cnx):
cnx.query("SELECT mroonga_command('register token_filters/stem')")
self.assertEqual(((b'true',),), cnx.store_result().fetch_row(maxrows=2))
cnx.query(
"""
CREATE TABLE memos (
id INT NOT NULL PRIMARY KEY,
content TEXT NOT NULL,
FULLTEXT INDEX (content) COMMENT 'normalizer "NormalizerAuto", token_filters "TokenFilterStem"'
) Engine=Mroonga DEFAULT CHARSET=utf8
""")
cnx.store_result()
cnx.query(
"""INSERT INTO memos VALUES (1, "I develop Groonga"), (2, "I'm developing Groonga"), (3, "I developed Groonga")"""
)
cnx.store_result()
cnx.query(
"""
SELECT *
FROM memos
WHERE MATCH (content) AGAINST ("+develops" IN BOOLEAN MODE)
""")
self.assertEqual([
(1, "I develop Groonga"),
(2, "I'm developing Groonga"),
(3, "I developed Groonga"),
], list(sorted(cnx.store_result().fetch_row(maxrows=4))))
# Copyright (C) 2022 Nexedi SA and Contributors.
#
# This program is free software: you can Use, Study, Modify and Redistribute
# it under the terms of the GNU General Public License version 3, or (at your
# option) any later version, as published by the Free Software Foundation.
#
# You can also Link and Combine this program with other software covered by
# the terms of any of the Free Software licenses or any of the Open Source
# Initiative approved licenses and Convey the resulting work. Corresponding
# source of such a combination shall include the source code for all other
# software used.
#
# This program is distributed WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See COPYING file for full licensing terms.
# See https://www.nexedi.com/licensing for rationale and options.
import json
import os.path
import unittest
from slapos.grid.utils import md5digest
from . import ERP5InstanceTestCase, setUpModule as _setUpModule
from .test_erp5 import TestPublishedURLIsReachableMixin
# skip tests when the software release is built with wendelin.core 1.
def setUpModule():
_setUpModule()
cls = ERP5InstanceTestCase
if not os.path.exists(
os.path.join(
cls.slap.software_directory,
md5digest(cls.getSoftwareURL()),
'bin', 'wcfs')):
raise unittest.SkipTest("built with wendelin.core 1")
class TestWCFS(ERP5InstanceTestCase, TestPublishedURLIsReachableMixin):
"""Test Wendelin Core File System
"""
__partition_reference__ = 'wcfs'
# Only run in ZEO mode; don't run with NEO.
# Current NEO/py and NEO/go versions have interoperability
# issues. Once these issues are fixed, the following
# lines have to be removed so that the test case runs against NEO.
# Please see the following MR for more context:
# https://lab.nexedi.com/nexedi/slapos/merge_requests/1283#note_174854
@classmethod
def setUpClass(cls):
if json.loads(cls.getInstanceParameterDict()["_"])['zodb'][0]["type"] == "neo":
raise unittest.SkipTest("Not yet fixed WCFS+NEO interoperability issue.")
super().setUpClass()
@classmethod
def getInstanceParameterDict(cls):
return {'_': json.dumps({'wcfs': {'enable': True}})}
def test_wcfs_accessible(self):
"""Verify that wcfs filesystem is basically accessible.
- we can read .wcfs/zurl
- its content is equal to the published `serving-zurl`
"""
zurl = json.loads(
self.getComputerPartition('wcfs').getConnectionParameter('_')
)['serving-zurl']
mntpt = lookupMount(zurl)
zurl_ = readfile("%s/.wcfs/zurl" % mntpt)
self.assertEqual(zurl_, zurl)
# lookupMount returns the /proc/mounts entry for the wcfs mounted to serve zurl.
def lookupMount(zurl):
for line in readfile('/proc/mounts').splitlines():
# <zurl> <mountpoint> fuse.wcfs ...
zurl_, mntpt, typ, _ = line.split(None, 3)
if typ != 'fuse.wcfs':
continue
if zurl_ == zurl:
return mntpt
raise KeyError("lookup mount %s: no /proc/mounts entry" % zurl)
# readfile returns content of file @path.
def readfile(path):
with open(path) as f:
return f.read()
Upgrade tests for ERP5 software release
##############################################################################
#
# Copyright (c) 2020 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
from setuptools import setup, find_packages
version = '0.0.1.dev0'
name = 'slapos.test.upgrade_erp5'
with open("README.md") as f:
long_description = f.read()
setup(name=name,
version=version,
description="Upgrade test for SlapOS' ERP5 software release",
long_description=long_description,
long_description_content_type='text/markdown',
maintainer="Nexedi",
maintainer_email="info@nexedi.com",
url="https://lab.nexedi.com/nexedi/slapos",
packages=find_packages(),
install_requires=[
'slapos.core',
'supervisor',
'slapos.libnetworkcache',
],
test_suite='test',
)
##############################################################################
#
# Copyright (c) 2020 Nexedi SA and Contributors. All Rights Reserved.
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
##############################################################################
import contextlib
import glob
import json
import os
import ssl
import sys
import tempfile
import time
import requests
import urllib.parse
import xmlrpc.client
import urllib3
from slapos.grid.utils import md5digest
from slapos.testing.testcase import (
SlapOSNodeCommandError,
installSoftwareUrlList,
makeModuleSetUpAndTestCaseClass,
)
old_software_release_url = 'https://lab.nexedi.com/nexedi/slapos/raw/1.0.167.8/software/erp5/software.cfg'
new_software_release_url = os.path.abspath(
os.path.join(os.path.dirname(__file__), '..', 'software.cfg'))
_, SlapOSInstanceTestCase = makeModuleSetUpAndTestCaseClass(
old_software_release_url,
software_id="upgrade_erp5",
skip_software_check=True,
)
def setUpModule():
installSoftwareUrlList(
SlapOSInstanceTestCase,
[old_software_release_url, new_software_release_url],
debug=SlapOSInstanceTestCase._debug,
)
class ERP5UpgradeTestCase(SlapOSInstanceTestCase):
# use short partition names for unix sockets
__partition_reference__ = 'u'
@classmethod
def setUpOldInstance(cls):
"""setUp hook executed while to old instance is running, before update
"""
pass
_current_software_url = old_software_release_url
@classmethod
def getSoftwareURL(cls):
return cls._current_software_url
@classmethod
def setUpClass(cls):
# request and instantiate with old software url
super().setUpClass()
cls.setUpOldInstance()
# request instance on new software
cls._current_software_url = new_software_release_url
cls.logger.debug('requesting instance on new software')
cls.requestDefaultInstance()
# wait for slapos node instance
snapshot_name = "{}.{}.setUpClass new instance".format(
cls.__module__, cls.__name__)
try:
if cls._debug and cls.instance_max_retry:
try:
cls.slap.waitForInstance(max_retry=cls.instance_max_retry - 1)
except SlapOSNodeCommandError:
cls.slap.waitForInstance(debug=True)
else:
cls.slap.waitForInstance(
max_retry=cls.instance_max_retry, debug=cls._debug)
cls.logger.debug("instance on new software done")
except BaseException:
cls.logger.exception("Error during instance on new software")
cls._storeSystemSnapshot(snapshot_name)
cls._cleanup(snapshot_name)
cls.setUp = lambda self: self.fail('Setup Class failed.')
raise
else:
cls._storeSystemSnapshot(snapshot_name)
cls.computer_partition = cls.requestDefaultInstance()
class TestERP5Upgrade(ERP5UpgradeTestCase):
@classmethod
def tearDownClass(cls):
cls.session.close()
super().tearDownClass()
@classmethod
def setUpOldInstance(cls):
cls._default_instance_old_parameter_dict = param_dict = json.loads(
cls.computer_partition.getConnectionParameterDict()['_'])
# use a session to retry on failures, when ERP5 is not ready.
# (see also TestPublishedURLIsReachableMixin)
cls.session = requests.Session()
cls.session.mount(
param_dict['family-default-v6'],
requests.adapters.HTTPAdapter(
max_retries=urllib3.util.retry.Retry(
total=20,
backoff_factor=.1,
status_forcelist=(404, 500, 503),
)))
# rebuild a URL with user and password
parsed = urllib.parse.urlparse(param_dict['family-default'])
cls.authenticated_zope_base_url = parsed._replace(
netloc='{}:{}@{}:{}'.format(
param_dict['inituser-login'],
param_dict['inituser-password'],
parsed.hostname,
parsed.port,
),
path=param_dict['site-id'] + '/',
).geturl()
cls.zope_base_url = '{family_default_v6}/{site_id}'.format(
family_default_v6=param_dict['family-default-v6'],
site_id=param_dict['site-id'],
)
# wait for old site creation
cls.session.get(
f'{cls.zope_base_url}/person_module',
auth=requests.auth.HTTPBasicAuth(
username=param_dict['inituser-login'],
password=param_dict['inituser-password'],
),
verify=False,
allow_redirects=False,
).raise_for_status()
# Create scripts that create test data and search the catalog for it.
@contextlib.contextmanager
def getXMLRPCClient():
# don't verify certificate
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
erp5_xmlrpc_client = xmlrpc.client.ServerProxy(
cls.authenticated_zope_base_url,
context=ssl_context,
)
with erp5_xmlrpc_client:
yield erp5_xmlrpc_client
def addPythonScript(script_id, params, body):
with getXMLRPCClient() as erp5_xmlrpc_client:
custom = erp5_xmlrpc_client.portal_skins.custom
try:
custom.manage_addProduct.PythonScripts.manage_addPythonScript(
script_id)
except xmlrpc.client.ProtocolError as e:
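# the add form replies with a 302 redirect on success, which the XML-RPC
# client surfaces as a ProtocolError; anything else is a real error.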
if e.errcode != 302:
raise
getattr(custom, script_id).ZPythonScriptHTML_editAction(
'',
'',
params,
body,
)
# a python script to create a person with a name
addPythonScript(
script_id='ERP5Site_createTestPerson',
params='name',
body='''if 1:
portal = context.getPortalObject()
portal.person_module.newContent(
first_name=name,
)
return 'Done.'
''',
)
# a python script to search for persons by name
addPythonScript(
script_id='ERP5Site_searchTestPerson',
params='name',
body='''if 1:
import json
portal = context.getPortalObject()
result = [brain.getObject().getTitle() for brain in portal.portal_catalog(
portal_type='Person',
title=name,)]
assert result # raise so that we retry until indexed
return json.dumps(result)
''',
)
cls.session.post(
'{zope_base_url}/ERP5Site_createTestPerson'.format(
zope_base_url=cls.zope_base_url),
auth=requests.auth.HTTPBasicAuth(
username=param_dict['inituser-login'],
password=param_dict['inituser-password'],
),
data={
'name': 'before upgrade'
},
verify=False,
allow_redirects=False,
).raise_for_status()
assert cls.session.get(
'{zope_base_url}/ERP5Site_searchTestPerson'.format(
zope_base_url=cls.zope_base_url),
auth=requests.auth.HTTPBasicAuth(
username=param_dict['inituser-login'],
password=param_dict['inituser-password'],
),
params={
'name': 'before upgrade'
},
verify=False,
allow_redirects=False,
).json() == ['before upgrade']
def test_published_url_is_same(self):
default_instance_new_parameter_dict = json.loads(
self.computer_partition.getConnectionParameterDict()['_'])
self.assertEqual(
default_instance_new_parameter_dict['family-default-v6'],
self._default_instance_old_parameter_dict['family-default-v6'],
)
def test_published_url_is_reachable(self):
default_instance_new_parameter_dict = json.loads(
self.computer_partition.getConnectionParameterDict()['_'])
# get certificate from caucase
with tempfile.NamedTemporaryFile(
prefix="ca.crt.pem",
mode="w",
delete=False,
) as ca_cert:
ca_cert.write(
requests.get(
urllib.parse.urljoin(
default_instance_new_parameter_dict['caucase-http-url'],
'/cas/crt/ca.crt.pem',
)).text)
ca_cert.flush()
self.session.get(
'{}/{}/login_form'.format(
default_instance_new_parameter_dict['family-default-v6'],
default_instance_new_parameter_dict['site-id'],
),
verify=False,
# TODO: we don't use caucase yet here.
# verify=ca_cert.name,
).raise_for_status()
def test_all_instances_use_new_software_release(self):
self.assertEqual(
{
os.path.basename(os.readlink(sr))
for sr in glob.glob(
os.path.join(
self.slap.instance_directory,
'*',
'software_release',
))
},
{md5digest(self.getSoftwareURL())},
)
def test_catalog_available(self):
param_dict = json.loads(
self.computer_partition.getConnectionParameterDict()['_'])
# data created before upgrade is available
self.assertEqual(
self.session.get(
'{zope_base_url}/ERP5Site_searchTestPerson'.format(
zope_base_url=self.zope_base_url),
auth=requests.auth.HTTPBasicAuth(
username=param_dict['inituser-login'],
password=param_dict['inituser-password'],
),
params={
'name': 'before upgrade'
},
verify=False,
allow_redirects=False,
).json(), ['before upgrade'])
# create data after upgrade
self.session.post(
'{zope_base_url}/ERP5Site_createTestPerson'.format(
zope_base_url=self.zope_base_url),
auth=requests.auth.HTTPBasicAuth(
username=param_dict['inituser-login'],
password=param_dict['inituser-password'],
),
data={
'name': 'after upgrade'
},
verify=False,
allow_redirects=False,
).raise_for_status()
# new data can also be found
self.assertEqual(
self.session.get(
'{zope_base_url}/ERP5Site_searchTestPerson'.format(
zope_base_url=self.zope_base_url),
auth=requests.auth.HTTPBasicAuth(
username=param_dict['inituser-login'],
password=param_dict['inituser-password'],
),
params={
'name': 'after upgrade'
},
verify=False,
allow_redirects=False,
).json(), ['after upgrade'])
(in no special order)
General:
- ipv6 support (besides frontend-backend apache connection)
requires important changes at ERP5 level
- resilience
- mariadb: make x509 mandatory (needs ZMySQLD*A support)
- make postfix log inside partition
- document postfix parameters (only once it actually works)
Backups:
- flush binlogs independently from full backups (in addition to flushing them anyway on full backup creation)
- rotate tidstorage consistency points
- make mysql backup path an instance parameter
- make srv/backup/zodb the default value for a parameter (zodb{ 'backup_root': ...} or so) so that a single value can be modified to relocate a partition's zodb backups
- make srv/backup/logrotate customisable (per partition, otherwise files will overwrite each other)
[buildout]
extends =
# Exact version of Zope
ztk-versions.cfg
zope-versions.cfg
buildout.hash.cfg
../../component/fonts/buildout.cfg
../../component/git/buildout.cfg
../../component/ghostscript/buildout.cfg
../../component/graphviz/buildout.cfg
../../component/gzip/buildout.cfg
../../component/xz-utils/buildout.cfg
../../component/haproxy/buildout.cfg
../../component/socat/buildout.cfg
../../component/rsyslogd/buildout.cfg
../../component/findutils/buildout.cfg
../../component/librsvg/buildout.cfg
../../component/imagemagick/buildout.cfg
../../component/jpegoptim/buildout.cfg
../../component/kumo/buildout.cfg
../../component/libdmtx/buildout.cfg
../../component/matplotlib/buildout.cfg
../../component/numpy/buildout.cfg
../../component/statsmodels/buildout.cfg
../../component/h5py/buildout.cfg
../../component/ocropy/buildout.cfg
../../component/optipng/buildout.cfg
../../component/pandas/buildout.cfg
../../component/percona-toolkit/buildout.cfg
../../component/patch/buildout.cfg
../../component/pillow/buildout.cfg
../../component/pycrypto-python/buildout.cfg
../../component/pysvn-python/buildout.cfg
../../component/python-ldap-python/buildout.cfg
../../component/scikit-learn/buildout.cfg
../../component/scikit-image/buildout.cfg
../../component/PyWavelets/buildout.cfg
../../component/subversion/buildout.cfg
../../component/tempstorage/buildout.cfg
../../component/tesseract/buildout.cfg
../../component/w3m/buildout.cfg
../../component/poppler/buildout.cfg
../../component/zabbix/buildout.cfg
../../component/sed/buildout.cfg
../../component/coreutils/buildout.cfg
../../component/grep/buildout.cfg
../../component/dash/buildout.cfg
../../component/bash/buildout.cfg
../../component/wget/buildout.cfg
../../component/aspell/buildout.cfg
../../component/jsl/buildout.cfg
../../component/6tunnel/buildout.cfg
../../component/userhosts/buildout.cfg
../../component/postfix/buildout.cfg
../../component/zbarlight/buildout.cfg
../../component/pylint/buildout.cfg
../../component/perl-Image-ExifTool/buildout.cfg
../../component/wendelin.core/buildout.cfg
../../component/jupyter-py2/buildout.cfg
../../component/pygolang/buildout.cfg
../../component/bcrypt/buildout.cfg
../../component/python-pynacl/buildout.cfg
../../component/python-xmlsec/buildout.cfg
../../component/selenium/buildout.cfg
../../stack/caucase/buildout.cfg
../../software/neoppod/software-common.cfg
# keep neoppod extends last
parts +=
erp5-util-develop
slapos-cookbook
mroonga-mariadb
tesseract
zabbix-agent
.coveragerc
# Buildoutish
eggs-all-scripts
testrunner
test-suite-runner
# get git repositories
genbt5list
# some additional utils
zodbpack
# Create instance template
template
# jupyter
jupyter-notebook-initialized-scripts
# override instance-jupyter-notebook so that it does not render into the default template.cfg
[instance-jupyter-notebook]
output = ${buildout:directory}/template-jupyter.cfg
[download-base]
<= download-base-neo
url = ${:_profile_base_location_}/${:filename}
[mariadb-start-clone-from-backup]
<= download-base
[mariadb-resiliency-after-import-script]
<= download-base
[mariadb-slow-query-report-script]
<= download-base
[template-mariadb]
<= download-base
link-binary =
${coreutils:location}/bin/basename
${coreutils:location}/bin/cat
${coreutils:location}/bin/cp
${coreutils:location}/bin/ls
${coreutils:location}/bin/tr
${coreutils:location}/bin/uname
${gettext:location}/lib/gettext/hostname
${grep:location}/bin/grep
${sed:location}/bin/sed
${mariadb:location}/bin/mysqlbinlog
[template-kumofs]
<= download-base
[template-zope-conf]
<= download-base
[site-zcml]
<= download-base
[template-my-cnf]
<= download-base
[template-mariadb-initial-setup]
<= download-base
[template-postfix]
<= download-base
[template-postfix-master-cf]
<= download-base
[template-postfix-main-cf]
<= download-base
[template-postfix-aliases]
<= download-base
[template-run-zelenium]
<= download-base
[template]
recipe = slapos.recipe.template:jinja2
# XXX: "template.cfg" is hardcoded in instanciation recipe
output = ${buildout:directory}/template.cfg
url = ${:_profile_base_location_}/${:filename}
context =
key mariadb_link_binary template-mariadb:link-binary
key zope_link_binary template-zope:link-binary
key zope_fonts template-zope:fonts
key zope_fontconfig_includes template-zope:fontconfig-includes
key apache_location apache:location
key bin_directory buildout:bin-directory
key buildout_bin_directory buildout:bin-directory
key caucase_jinja2_library caucase-jinja2-library:target
key coreutils_location coreutils:location
key curl_location curl:location
key cyrus_sasl_location cyrus-sasl:location
key dash_location dash:location
key bash_location bash:location
key dcron_location dcron:location
key default_cloudooo_url_list erp5-defaults:cloudooo-connection-url-list
key erp5_location erp5:location
key findutils_location findutils:location
key gzip_location gzip:location
key xz_utils_location xz-utils:location
key haproxy_location haproxy:location
key socat_location socat:location
key rsyslogd_location rsyslogd:location
key instance_common_cfg instance-common:output
key jupyter_enable_default erp5-defaults:jupyter-enable-default
key wcfs_enable_default erp5-defaults:wcfs-enable-default
key kumo_location kumo:location
key local_bt5_repository local-bt5-repository:list
key logrotate_location logrotate:location
key mariadb_location mariadb:location
key mariadb_resiliency_after_import_script mariadb-resiliency-after-import-script:target
key mariadb_slow_query_report_script mariadb-slow-query-report-script:target
key mariadb_start_clone_from_backup mariadb-start-clone-from-backup:target
key mroonga_mariadb_install_sql mroonga-mariadb:install-sql
key mroonga_mariadb_plugin_dir mroonga-mariadb:plugin-dir
key groonga_plugin_dir groonga:groonga-plugin-dir
key groonga_mysql_normalizer_plugin_dir groonga-normalizer-mysql:groonga-plugin-dir
key matplotlibrc_location matplotlibrc:location
key parts_directory buildout:parts-directory
key openssl_location openssl:location
key percona_toolkit_location percona-toolkit:location
key perl_dbd_mariadb_path perl-DBD-mariadb:perl-PATH
key postfix_location postfix:location
key root_common root-common:target
key site_zcml site-zcml:target
key sixtunnel_location 6tunnel:location
key template_run_zelenium template-run-zelenium:target
key egg_interpreter erp5-python-interpreter:interpreter
key template_apache_conf template-apache-backend-conf:target
key template_balancer template-balancer:target
key template_erp5 template-erp5:target
key template_haproxy_cfg template-haproxy-cfg:target
key template_rsyslogd_cfg template-rsyslogd-cfg:target
key template_jupyter_cfg instance-jupyter-notebook:output
key template_kumofs template-kumofs:target
key template_mariadb template-mariadb:target
key template_mariadb_initial_setup template-mariadb-initial-setup:target
key template_my_cnf template-my-cnf:target
key template_mysqld_wrapper template-mysqld-wrapper:output
key template_postfix template-postfix:target
key template_postfix_aliases template-postfix-aliases:target
key template_postfix_main_cf template-postfix-main-cf:target
key template_postfix_master_cf template-postfix-master-cf:target
key instance_wcfs_cfg_in instance-wcfs.cfg.in:target
key template_zeo template-zeo:target
key template_zodb_base template-zodb-base:target
key template_zope template-zope:target
key template_zope_conf template-zope-conf:target
key template_fonts_conf template-fonts-conf:output
key userhosts_location userhosts:location
key unixodbc_location unixodbc:location
key wget_location wget:location
key extra_path_list eggs:extra-paths
key python_executable_for_kernel erp5-python-interpreter-jupyter:interpreter_path
key erp5_kernel_location erp5-kernel:location
key erp5_kernel_filename erp5-kernel:filename
key kernel_json_location kernel-json:location
key kernel_json_filename kernel-json:filename
[template-erp5]
<= download-base
[template-zeo]
<= download-base
[template-zodb-base]
<= download-base
[template-zope]
<= download-base
link-binary =
${aspell-en-dictionary:bin-aspell}
${dmtx-utils:location}/bin/dmtxwrite
${git:location}/bin/git
${graphviz:location}/bin/dot
${grep:location}/bin/grep
${imagemagick:location}/bin/convert
${ghostscript:location}/bin/gs
${imagemagick:location}/bin/identify
${jpegoptim:location}/bin/jpegoptim
${jsl:location}/bin/jsl
${librsvg:location}/bin/rsvg-convert
${mariadb:location}/bin/mysql
${mariadb:location}/bin/mysqldump
${openssl:location}/bin/openssl
${optipng:location}/bin/optipng
${perl-Image-ExifTool:location}/bin/exiftool
${poppler:location}/bin/pdfinfo
${poppler:location}/bin/pdftohtml
${poppler:location}/bin/pdftotext
${python2.7:location}/bin/2to3
${sed:location}/bin/sed
${tesseract:location}/bin/tesseract
${w3m:location}/bin/w3m
fonts =
${liberation-fonts:location}
${ipaex-fonts:location}
${ipa-fonts:location}
${ocrb-fonts:location}
${android-fonts:location}
fontconfig-includes =
${fontconfig:location}/etc/fonts/conf.d
[template-balancer]
<= download-base
[template-haproxy-cfg]
<= download-base
[template-rsyslogd-cfg]
<= download-base
[instance-wcfs.cfg.in]
<= download-base
[erp5-bin]
<= erp5
repository = https://lab.nexedi.com/nexedi/erp5-bin.git
branch = master
[erp5-doc]
<= erp5
repository = https://lab.nexedi.com/nexedi/erp5-doc.git
branch = master
[bt5-repository]
# Format:
# <url or path> [...]
#
# Use absolute paths for local repositories, and URLs otherwise.
#
list = ${local-bt5-repository:list}
[local-bt5-repository]
# Same as bt5-repository, but only local repositories.
# Used to generate bt5lists.
list = ${erp5:location}/bt5 ${erp5:location}/product/ERP5/bootstrap ${erp5-bin:location}/bt5 ${erp5-doc:location}/bt5
[genbt5list]
recipe = plone.recipe.command
stop-on-error = true
genbt5list = ${erp5:location}/product/ERP5/bin/genbt5list
command =
echo '${local-bt5-repository:list}' |xargs ${buildout:executable} ${:genbt5list}
update-command = ${:command}
[erp5_repository_list]
repository_id_list = erp5 erp5-bin erp5-doc
# ERP5 defaults, which can be overridden in inheriting recipes (e.g. wendelin)
[erp5-defaults]
# Default cloudooo is https://cloudooo.erp5.net/ and https://cloudooo1.erp5.net/ in random order.
# The random order is applied in the instance profile
cloudooo-connection-url-list =
https://cloudooo.erp5.net/
https://cloudooo1.erp5.net/
# Jupyter is by default disabled in ERP5
jupyter-enable-default = false
# WCFS is by default disabled in ERP5
wcfs-enable-default = false
[erp5]
recipe = slapos.recipe.build:gitclone
repository = https://lab.nexedi.com/nexedi/erp5.git
branch = master
git-executable = ${git:location}/bin/git
[testrunner]
# XXX: Workaround for the fact that ERP5Type is not a distribution and does not
# expose an entry point for the test runner
recipe = zc.recipe.egg
eggs = ${eggs:eggs}
extra-paths = ${eggs:extra-paths}
entry-points =
runUnitTest=runUnitTest:main
scripts = runUnitTest
initialization =
import glob, os, sys, json
buildout_directory = '''${buildout:directory}'''
parts_directory = '''${buildout:parts-directory}'''
repository_id_list = \
'''${erp5_repository_list:repository_id_list}'''.split()[::-1]
# read testrunner configuration from slapos instance parameters to
# configure coverage if enabled.
with open(os.environ['ERP5_TEST_RUNNER_CONFIGURATION']) as f:
test_runner_configuration = json.load(f)
test_runner_configuration.setdefault('coverage', {})
test_runner_configuration['coverage'].setdefault('enabled', False)
coverage_process = None
if test_runner_configuration['coverage']['enabled']:
test_runner_configuration['coverage'].setdefault(
'include', [os.path.join('parts', repo, '*') for repo in repository_id_list])
assets_directory = ''
test_name = sys.argv[-1].replace(':', '_')
if os.environ.get('SLAPOS_TEST_LOG_DIRECTORY'):
assets_directory = os.path.join(os.environ['SLAPOS_TEST_LOG_DIRECTORY'], test_name)
if not os.path.exists(assets_directory):
os.makedirs(assets_directory)
coverage_data_file = os.path.abspath(
os.path.join(assets_directory, 'coverage.sqlite3'))
curdir = os.path.abspath(os.curdir)
# change current directory when importing coverage so that it considers paths
# relative to the root of the software
os.chdir(buildout_directory)
import coverage
coverage_process = coverage.Coverage(
include=test_runner_configuration['coverage']['include'],
data_file=coverage_data_file,
branch=test_runner_configuration['coverage'].get('branch'),
)
coverage_process.set_option('run:relative_files', 'true')
coverage_process.set_option('run:plugins', ['erp5_coverage_plugin'])
coverage_process.start()
os.chdir(curdir)
import Products
Products.__path__[:0] = filter(None,
os.getenv('INSERT_PRODUCTS_PATH', '').split(os.pathsep))
os.environ['ZOPE_SCRIPTS'] = ''
os.environ['erp5_tests_bt5_path'] = ','.join(
os.path.join(parts_directory, x, 'bt5') for x in repository_id_list)
extra_path_list = '''${:extra-paths}'''.split()
sys.path[:0] = sum((
glob.glob(os.path.join(x, 'tests'))
for x in extra_path_list), [])
sys.path[:0] = sum((
glob.glob(os.path.join(x, 'Products', '*', 'tests'))
for x in extra_path_list), [])
sys.path[:0] = sum((
glob.glob(os.path.join(x, 'Products', '*', 'tests'))
for x in os.getenv('INSERT_PRODUCTS_PATH', '').split(os.pathsep)), [])
import runUnitTest
try:
sys.exit(runUnitTest.main())
finally:
if coverage_process:
coverage_process.stop()
coverage_process.save()
# upload the coverage data so that it can be combined on another machine
upload_url = test_runner_configuration['coverage'].get('upload-url')
if upload_url:
import requests
import time
import uritemplate
from six.moves.urllib.parse import urlparse
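# by default, make a single unauthenticated upload attempt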
auth_list = (None, )
parsed_url = urlparse(upload_url)
if parsed_url.username:
# try Digest and Basic authentication and retry 5 times to tolerate transient errors
auth_list = (
requests.auth.HTTPDigestAuth(parsed_url.username, parsed_url.password),
requests.auth.HTTPBasicAuth(parsed_url.username, parsed_url.password),
) * 5
url = uritemplate.URITemplate(upload_url).expand(
test_name=test_name,
# Environment variables are set in parts/erp5/product/ERP5Type/tests/runTestSuite.py
test_result_id=os.environ.get('ERP5_TEST_RESULT_ID', 'unknown_test_result_id'),
test_result_revision=os.environ.get('ERP5_TEST_RESULT_REVISION', 'unknown_test_result_revision'),
)
for auth in auth_list:
with open(coverage_data_file, 'rb') as f:
resp = requests.put(url, data=f, auth=auth)
if resp.ok:
# print just the hostname, so as not to include the auth part
print('Uploaded coverage data to {parsed_url.hostname}'.format(parsed_url=parsed_url))
break
print('Error {resp.status_code} uploading coverage data to {parsed_url.hostname} with {auth.__class__.__name__}'.format(
resp=resp, parsed_url=parsed_url, auth=auth))
time.sleep(1)
else:
sys.stderr.write('Error uploading coverage data to {parsed_url.hostname}\n'.format(parsed_url=parsed_url))
[.coveragerc]
recipe = slapos.recipe.template
output = ${buildout:directory}/${:_buildout_section_name_}
inline =
# coverage configuration file, useful when making html report
[run]
plugins =
erp5_coverage_plugin
relative_files = true
[test-suite-runner]
# XXX: Workaround for the fact that ERP5Type is not a distribution and does not
# expose an entry point for the test runner
recipe = zc.recipe.egg
eggs = ${eggs:eggs}
extra-paths = ${eggs:extra-paths}
entry-points =
runTestSuite=Products.ERP5Type.tests.runTestSuite:main
scripts = runTestSuite
initialization =
import os
import sys
import Products
[Products.__path__.insert(0, p) for p in reversed(os.environ.get('INSERT_PRODUCTS_PATH', '').split(':')) if p]
os.environ['ZOPE_SCRIPTS'] = ''
repository_id_list = list(reversed('''${erp5_repository_list:repository_id_list}'''.split()))
sys.path[0:0] = ['/'.join(['''${buildout:parts-directory}''', x]) for x in repository_id_list]
[erp5-python-interpreter]
<= python-interpreter
# a python interpreter with all eggs available, usable for the software release but also
# for external tools (such as the Python extension in Theia).
eggs += ${eggs:eggs}
extra-paths += ${eggs:extra-paths}
[erp5-python-interpreter-jupyter]
<= erp5-python-interpreter
interpreter = pythonwitheggs_jupyter
interpreter_path = ${buildout:directory}/bin/${:interpreter}
eggs +=
jupyter_client
jupyter_core
ipython_genutils
ipykernel
ipywidgets
requests
[zope-product-with-eggtestinfo]
recipe = zc.recipe.egg:custom
setup-eggs =
eggtestinfo
egg = ${:_buildout_section_name_}
[Products.CMFUid]
<= zope-product-with-eggtestinfo
[Products.CMFActionIcons]
<= zope-product-with-eggtestinfo
[Products.CMFCalendar]
<= zope-product-with-eggtestinfo
[Products.CMFCore]
<= zope-product-with-eggtestinfo
[Products.CMFDefault]
<= zope-product-with-eggtestinfo
[Products.CMFTopic]
<= zope-product-with-eggtestinfo
[Products.DCWorkflow]
<= zope-product-with-eggtestinfo
Products.DCWorkflow-patches = ${:_profile_base_location_}/../../component/egg-patch/Products.DCWorkflow/workflow_method.patch#975b49e96bae33ac8563454fe5fa9899
Products.DCWorkflow-patch-options = -p1
[Products.GenericSetup]
<= zope-product-with-eggtestinfo
[eggs]
<= neoppod
eggs = ${neoppod:eggs}
${caucase-eggs:eggs}
${wendelin.core:egg}
${numpy:egg}
${matplotlib:egg}
${lxml-python:egg}
${ocropy:egg}
${pandas:egg}
${pillow-python:egg}
${python-ldap-python:egg}
${python-xmlsec:egg}
${pysvn-python:egg}
${pycrypto-python:egg}
${scipy:egg}
${scikit-learn:egg}
${scikit-image:egg}
sympy
more-itertools
${h5py:egg}
openpyxl
${statsmodels:egg}
${zbarlight:egg}
lock_file
astor
APacheDEX
PyStemmer
Pympler
SOAPpy
chardet
collective.recipe.template
erp5diff
interval
ipdb
Jinja2
jsonschema
mechanize
mock
oauthlib
objgraph
${python-pynacl:egg}
${bcrypt:egg}
paramiko
ply
pyflakes
PyPDF2
python-magic
python-memcached
pytz
requests
responses
urlnorm
uuid
xml_marshaller
xupdate_processor
feedparser
validictory
erp5.util
z3c.etestbrowser
huBarcode
qrcode
spyne
httplib2
suds
pprofile
pycountry
xfw
jsonschema
${selenium:egg}
pytesseract
decorator
networkx
# Needed for checking ZODB Components source code
${astroid:egg}
${pylint:egg}
jedi
yapf
typing
pytracemalloc
xlrd
# Zope
Zope2
${tempstorage:egg}
# Zope acquisition patch
Acquisition
# for runzeo
${ZEO:egg}
# Other Zope 2 packages
Products.PluggableAuthService
Products.PluginRegistry
# CMF 2.2
${Products.CMFActionIcons:egg}
${Products.CMFCalendar:egg}
${Products.CMFCore:egg}
${Products.CMFDefault:egg}
${Products.CMFTopic:egg}
${Products.CMFUid:egg}
${Products.DCWorkflow:egg}
${Products.GenericSetup:egg}
five.localsitemanager
# Other products
Products.MimetypesRegistry
Products.TIDStorage
Products.LongRequestLogger
# BBB: Temporarily keep zope.app.testing until we use a newer version of CMF
# (for tests like testCookieCrumbler).
zope.app.testing
# Currently forked in our repository
# Products.PortalTransforms
# Dependency for our fork of PortalTransforms
StructuredText
# Needed for parsing .po files from our Localizer subset
polib
# Needed for Google OAuth
google-api-python-client
# Needed for Facebook OAuth
facebook-sdk
# Used by ERP5 Jupyter backend
ipykernel
# Used by DiffTool
deepdiff
unidiff
# WSGI server
zope.globalrequest
waitress
# OpenId Connect
oic
# json schema validation
strict-rfc3339
jsonschema[format]
entry-points =
runwsgi=Products.ERP5.bin.zopewsgi:runwsgi
scripts =
apachedex
performance_tester_erp5
runwsgi
runzope
runzeo
tidstoraged
tidstorage_repozo
wcfs
web_checker_utility
extra-paths =
${erp5:location}
# patches for eggs
patch-binary = ${patch:location}/bin/patch
PyPDF2-patches = ${:_profile_base_location_}/../../component/egg-patch/PyPDF2/0001-Custom-implementation-of-warnings.formatwarning-remo.patch#d25bb0f5dde7f3337a0a50c2f986f5c8
PyPDF2-patch-options = -p1
Acquisition-patches = ${:_profile_base_location_}/../../component/egg-patch/Acquisition/aq_dynamic-2.13.12.patch#1d9a56e9af4371f5b6951ebf217a15d7
Acquisition-patch-options = -p1
python-magic-patches = ${:_profile_base_location_}/../../component/egg-patch/python_magic/magic.patch#de0839bffac17801e39b60873a6c2068
python-magic-patch-options = -p1
# neoppod installs bin/coverage, so we inject the erp5 plugin here so that the coverage script can use it in reports
[neoppod]
eggs +=
erp5_coverage_plugin
[eggs-all-scripts]
recipe = zc.recipe.egg
eggs =
munnel
patch-binary = ${eggs:patch-binary}
# develop erp5.util from parts/erp5/
[erp5-util-develop]
recipe = zc.recipe.egg:develop
setup = ${erp5:location}
[zodbpack]
recipe = zc.recipe.egg
eggs =
slapos.toolbox[zodbpack]
scripts =
zodbpack
depends =
${slapos-toolbox-dependencies:eggs}
[versions]
# See ../../software/neoppod/software-common.cfg for versions common with NEO:
# neoppod, mysqlclient, slapos.recipe.template
# patched eggs
Acquisition = 2.13.12+SlapOSPatched001
Products.DCWorkflow = 2.2.4+SlapOSPatched001
ocropy = 1.0+SlapOSPatched001
pysvn = 1.9.15+SlapOSPatched001
python-ldap = 2.4.32+SlapOSPatched001
python-magic = 0.4.12+SlapOSPatched001
PyPDF2 = 1.26.0+SlapOSPatched001
## https://lab.nexedi.com/nexedi/slapos/merge_requests/648
pylint = 1.4.4+SlapOSPatched002
# astroid 1.4.1 breaks testDynamicClassGeneration
astroid = 1.3.8+SlapOSPatched001
# use a newer version than specified in ZTK
PasteDeploy = 1.5.2
argparse = 1.4.0
zope.dottedname = 4.1.0
# modified version that works fine for buildout installation
SOAPpy = 0.12.0nxd001
# CMF 2.3 is not yet supported.
Products.CMFCalendar = 2.2.3
Products.CMFCore = 2.2.10
Products.CMFDefault = 2.2.4
Products.CMFTopic = 2.2.1
Products.CMFUid = 2.2.1
# newer version requires zope.traversing>=4.0.0a2.
zope.app.appsetup = 3.16.0
# newer version requires zope.i18n>=4.0.0a3
zope.app.publication = 3.14.0
# newer version requires zope.testbrowser>=4
zope.app.testing = 3.8.1
# Pinned versions
APacheDEX = 1.8
Pillow = 6.2.2
Products.CMFActionIcons = 2.1.3
Products.GenericSetup = 1.8.6
Products.LongRequestLogger = 2.1.0
# Products.MimetypesRegistry 2.1 requires AccessControl>=3.0.0.
Products.MimetypesRegistry = 2.0.10
Products.PluggableAuthService = 1.10.0
Products.PluginRegistry = 1.4
Products.TIDStorage = 5.5.0
pyPdf = 1.13
PyStemmer = 1.3.0
Pympler = 0.4.3
StructuredText = 2.11.1
WSGIUtils = 0.7
erp5-coverage-plugin = 0.0.1
erp5diff = 0.8.1.8
five.formlib = 1.0.4
five.localsitemanager = 2.0.5
google-api-python-client = 1.6.1
httplib2 = 0.10.3
huBarcode = 1.0.0
interval = 1.0.0
ipdb = 0.10.2
logilab-common = 1.3.0
munnel = 0.3
nt-svcutils = 2.13.0
oauth2client = 4.0.0
oauthlib = 3.1.0
objgraph = 3.1.0
polib = 1.0.8
pprofile = 2.0.4
pyasn1-modules = 0.0.8
pycountry = 17.1.8
pycrypto = 2.6.1
pyflakes = 1.5.0
python-memcached = 1.58
pytracemalloc = 1.2
qrcode = 5.3
rsa = 3.4.2
spyne = 2.12.14
suds = 0.4
facebook-sdk = 2.0.0
urlnorm = 1.1.4
uuid = 1.30
validictory = 1.1.0
xfw = 0.10
xupdate-processor = 0.5
scikit-image = 0.14.0
PyWavelets = 0.5.2
networkx = 2.1
pytesseract = 0.2.2
zbarlight = 2.3
cloudpickle = 0.5.3
dask = 0.18.1
toolz = 0.9.0
zope.globalrequest = 1.5
waitress = 1.4.4
Products.ZSQLMethods = 2.13.5
fpconst = 0.7.2
graphviz = 0.5.2
olefile = 0.44
python-libmilter = 1.0.3
zope.app.debug = 3.4.1
zope.app.dependable = 3.5.1
zope.app.form = 4.0.2
mpmath = 0.19
openpyxl = 2.4.8
jdcal = 1.3
deepdiff = 3.3.0
unidiff = 0.5.5
jsonpickle = 0.9.6
responses = 0.10.6
cookies = 2.2.1
jedi = 0.15.1
parso = 0.5.1
yapf = 0.28.0
z3c.etestbrowser = 3.0.1
zope.testbrowser = 5.5.1
WSGIProxy2 = 0.4.6
WebTest = 2.0.33
beautifulsoup4 = 4.8.2
WebOb = 1.8.5
soupsieve = 1.9.5
eggtestinfo = 0.3
oic = 0.15.1
Beaker = 1.11.0
Mako = 1.1.4
pyjwkest = 1.4.2
alabaster = 0.7.12
future = 0.18.2
pycryptodomex = 3.10.1
strict-rfc3339 = 0.7
webcolors = 1.10
rfc3987 = 1.3.8
jsonpointer = 2.2
pandas = 0.19.2
numpy = 1.13.1
scipy = 0.19.0
ZConfig = 2.9.3
# Override configuration specified in component/ZODB
BTrees = 4.5.1
persistent = 4.6.4
zodbpickle = 2.0.0
# Override configuration specified in component/neoppod
zope.event = 3.5.2
zope.exceptions = 3.6.2
zope.testing = 3.9.7
[ZODB]
major = 4
[ZODB5]
egg-versions =
ZODB = 5.8.0
transaction = 2.4.0
# THIS IS NOT A BUILDOUT FILE, despite purposely using a compatible syntax.
# The only allowed lines here are (regexes):
# - "^#" comments, copied verbatim
# - "^[" section beginnings, copied verbatim
# - lines containing an "=" sign which must fit one of the following categories.
# - "^\s*filename\s*=\s*path\s*$" where "path" is relative to this file
# Copied verbatim.
# - "^\s*hashtype\s*=.*" where "hashtype" is one of the values supported
# by the re-generation script.
# Re-generated.
# - other lines are copied verbatim
# Substitution (${...:...}), extension ([buildout] extends = ...) and
# section inheritance (< = ...) are NOT supported (but you should really
# not need these here).
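#
# Example entry (hypothetical section and file name):
#   [some-template]
#   filename = some-template.cfg.in
#   md5sum = <re-generated by the script>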
[mariadb-resiliency-after-import-script]
filename = instance-mariadb-resiliency-after-import-script.sh.in
md5sum = 85ce1e2f3d251aa435fef8118dca8a63
[mariadb-slow-query-report-script]
filename = mysql-querydigest.sh.in
md5sum = 6457ab192d709aa2c9014e9a3e91ca20
[mariadb-start-clone-from-backup]
filename = instance-mariadb-start-clone-from-backup.sh.in
md5sum = d10b8e35b02b5391cf46bf0c7dbb1196
[template-mariadb]
filename = instance-mariadb.cfg.in
md5sum = 93b2277185e4949a3d17be79d3710d2d
[template-kumofs]
filename = instance-kumofs.cfg.in
md5sum = 45cc45510b59ceb730b6e38448b5c0c3
[template-zope-conf]
filename = zope.conf.in
md5sum = ce8d03a1b4c1a9e5085ec54ea2744007
[site-zcml]
filename = site.zcml
md5sum = 43556e5bca8336dd543ae8068512aa6d
[template-my-cnf]
filename = my.cnf.in
md5sum = c0bde08ec6bd6d333315a15026266b65
[template-mariadb-initial-setup]
filename = mariadb_initial_setup.sql.in
md5sum = f928b9dc99f7f970caadfe7dd6f95d34
[template-postfix]
filename = instance-postfix.cfg.in
md5sum = 8f7bfca893a01c390df7a3dc9c2410e1
[template-postfix-master-cf]
filename = postfix_master.cf.in
md5sum = ef164517e3f7170d03499967d625c3bb
[template-postfix-main-cf]
filename = postfix_main.cf.in
md5sum = e9f03c66627beb4054d45123450162d2
[template-postfix-aliases]
filename = postfix_aliases.in
md5sum = 0969fbb25b05c02ef3c2d437b2f4e1a0
[template-run-zelenium]
filename = run-zelenium-test.py.in
md5sum = b95084ae9eed95a68eada45e28ef0c04
[template]
filename = instance.cfg.in
md5sum = 3f7b28085ceff321a3cb785db60f7c3e
[template-erp5]
filename = instance-erp5.cfg.in
md5sum = 30a1e738a8211887e75a5e75820e5872
[template-zeo]
filename = instance-zeo.cfg.in
md5sum = 0ba5735ab87ee53e2c203b1563b55ff0
[template-zodb-base]
filename = instance-zodb-base.cfg.in
md5sum = 0ac4b74436f554cd677f19275d18d880
[template-zope]
filename = instance-zope.cfg.in
md5sum = 0451190711157fc204418662126d5cf8
[template-balancer]
filename = instance-balancer.cfg.in
md5sum = b0751d3d12cfcc8934cb1027190f5e5e
[template-haproxy-cfg]
filename = haproxy.cfg.in
md5sum = 1645ef8990ab2b50f91a4c02f0cf8882
[template-rsyslogd-cfg]
filename = rsyslogd.cfg.in
md5sum = 5cf0316fdd17a940031e4083bbededd8
[instance-wcfs.cfg.in]
filename = instance-wcfs.cfg.in
md5sum = 0f921643a68e3a8de5529d653710ddca
{# This file configures haproxy to redirect requests from ports to specific URLs.
# It provides TLS support for the server and optionally for the client.
#
# All parameters are given through the `parameter_dict` variable; see the
# list entries:
#
# parameter_dict = {
#   # Path of the PID file. HAProxy will write its own PID to this file.
#   # Sending the USR2 signal to this PID will cause haproxy to reload
#   # its configuration.
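#   # (e.g. kill -USR2 "$(cat <file_path>)" triggers such a reload)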
# "pidfile": "<file_path>",
#
# # AF_UNIX socket for logs. Syslog must be listening on this socket.
# "log-socket": "<file_path>",
#
# # AF_UNIX socket for statistics and control.
# # Haproxy will listen on this socket.
# "stats-socket": "<file_path>",
#
# # IPv4 to listen on
#   # Every family in `backend-dict` gets a listener bound to this IP.
# "ipv4": "0.0.0.0",
#
# # IPv6 to listen on
#   # Every family in `backend-dict` gets a listener bound to this IP.
# "ipv6": "::1",
#
# # Certificate and key in PEM format. All ports will serve TLS using
# # this certificate.
# "cert": "<file_path>",
#
# # CA to verify client certificates in PEM format.
# # If set, client certificates will be verified with these CAs.
# # If not set, client certificates are not verified.
# "ca-cert": "<file_path>",
#
#   # An optional CRL in PEM format (the file can contain multiple CRLs).
# # This is required if ca-cert is passed.
# "crl": "<file_path>",
#
# # Path to use for HTTP health check on backends from `backend-dict`.
# "server-check-path": "/",
#
# # The mapping of backends, keyed by family name
# "backend-dict": {
# "family-secure": {
# ( 8000, # port int
# 'https', # proto str
# True, # ssl_required bool
# None, # timeout (in seconds) int | None
# [ # backends
# '10.0.0.10:8001', # netloc str
# 1, # max_connection_count int
# False, # is_web_dav bool
# ],
# ),
# },
# "family-default": {
# ( 8002, # port int
# 'https', # proto str
# False, # ssl_required bool
# None, # timeout (in seconds) int | None
# [ # backends
# '10.0.0.10:8003', # netloc str
# 1, # max_connection_count int
# False, # is_web_dav bool
# ],
# ),
# },
#
# # The mapping of zope paths.
# # This is a Zope specific feature.
#   # `enable_authentication` has the same meaning as `ssl_required` in `backend-dict`.
# "zope-virtualhost-monster-backend-dict": {
# # {(ip, port): ( enable_authentication, {frontend_path: ( internal_url ) }, ) }
# ('[::1]', 8004): (
# True, {
# 'zope-1': 'http://10.0.0.10:8001',
# 'zope-2': 'http://10.0.0.10:8002',
# },
# ),
# },
# }
#
# This sample `parameter_dict` will make haproxy listen as follows:
# From `backend-dict`:
#   For "family-secure":
#   - 0.0.0.0:8000 redirecting internally to http://10.0.0.10:8001 and
#   - [::1]:8000 redirecting internally to http://10.0.0.10:8001
#     only accepting requests from clients providing a verified TLS certificate
#     issued by a CA from `ca-cert` and not revoked in `crl`.
#   For "family-default":
#   - 0.0.0.0:8002 redirecting internally to http://10.0.0.10:8003
#   - [::1]:8002 redirecting internally to http://10.0.0.10:8003
#     accepting requests from any client.
#
# For both families, the X-Forwarded-For header will be stripped unless the
# client presents a certificate that can be verified with `ca-cert` and `crl`.
#
# From `zope-virtualhost-monster-backend-dict`:
# - [::1]:8004 with some path-based rewrite rules redirecting to:
#   * http://10.0.0.10:8001 when path matches /zope-1(.*)
#   * http://10.0.0.10:8002 when path matches /zope-2(.*)
#   with some VirtualHostMonster rewrite rules so zope writes URLs with
#   [::1]:8004 as the server name.
# For more details, refer to
# https://docs.zope.org/zope2/zope2book/VirtualHosting.html#using-virtualhostroot-and-virtualhostbase-together
-#}
{% set server_check_path = parameter_dict['server-check-path'] -%}
global
maxconn 4096
master-worker
pidfile {{ parameter_dict['pidfile'] }}
# SSL configuration was generated with mozilla SSL Configuration Generator
# generated 2020-10-28, Mozilla Guideline v5.6, HAProxy 2.1, OpenSSL 1.1.1g, modern configuration
# https://ssl-config.mozilla.org/#server=haproxy&version=2.1&config=modern&openssl=1.1.1g&guideline=5.6
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11 no-tlsv12 no-tls-tickets
ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tlsv12 no-tls-tickets
stats socket {{ parameter_dict['stats-socket'] }} level admin
defaults
mode http
retries 1
option redispatch
maxconn 2000
balance roundrobin
stats uri /haproxy
stats realm Global\ statistics
timeout connect 10s
timeout queue 60s
timeout client 305s
option http-server-close
# compress some content types
compression algo gzip
compression type application/font-woff application/font-woff2 application/hal+json application/javascript application/json application/rss+xml application/wasm application/x-font-opentype application/x-font-ttf application/x-javascript application/xml image/svg+xml text/cache-manifest text/css text/html text/javascript text/plain text/xml
log {{ parameter_dict['log-socket'] }} local0 info
{% set bind_ssl_crt = 'ssl crt ' ~ parameter_dict['cert'] ~ ' alpn h2,http/1.1' %}
{% set family_path_routing_dict = parameter_dict['family-path-routing-dict'] %}
{% set path_routing_list = parameter_dict['path-routing-list'] %}
{% for name, (port, _, certificate_authentication, timeout, backend_list) in sorted(six.iteritems(parameter_dict['backend-dict'])) -%}
listen family_{{ name }}
{%- if parameter_dict.get('ca-cert') -%}
{%- set ssl_auth = ' ca-file ' ~ parameter_dict['ca-cert'] ~ ' verify' ~ ( ' required' if certificate_authentication else ' optional' ) ~ ' crl-file ' ~ parameter_dict['crl'] %}
{%- else %}
{%- set ssl_auth = '' %}
{%- endif %}
bind {{ parameter_dict['ipv4'] }}:{{ port }} {{ bind_ssl_crt }} {{ ssl_auth }}
bind {{ parameter_dict['ipv6'] }}:{{ port }} {{ bind_ssl_crt }} {{ ssl_auth }}
cookie SERVERID rewrite
http-request set-header X-Balancer-Current-Cookie SERVERID
{% if timeout %}
{#
Apply a slightly longer timeout than the zope timeout so that clients can see the
TimeoutReachedError from zope, which is a bit more informative than the 504 error
page from haproxy.
#}
timeout server {{ timeout + 3 }}s
{%- endif %}
# remove X-Forwarded-For unless client presented a verified certificate
acl client_cert_verified ssl_c_used ssl_c_verify 0
http-request del-header X-Forwarded-For unless client_cert_verified
# set Remote-User if client presented a verified certificate
http-request del-header Remote-User
http-request set-header Remote-User %{+Q}[ssl_c_s_dn(cn)] if client_cert_verified
# logs
capture request header Referer len 512
capture request header User-Agent len 512
log-format "%{+Q}o %{-Q}ci - - [%trg] %r %ST %B %{+Q}[capture.req.hdr(0)] %{+Q}[capture.req.hdr(1)] %Ta"
{% for outer_prefix, inner_prefix in family_path_routing_dict.get(name, []) + path_routing_list %}
{% set outer_prefix = outer_prefix.strip('/') -%}
http-request replace-path ^(/+VirtualHostBase/+[^/]+/+[^/]+)/+VirtualHostRoot/+{% if outer_prefix %}{{ outer_prefix }}($|/.*){% else %}(.*){% endif %} \1/{{ inner_prefix.strip('/') }}/VirtualHostRoot/{% if outer_prefix %}_vh_{{ outer_prefix.replace('/', '/_vh_') }}{% endif %}\2
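{#- Hypothetical example: with outer_prefix "web" and inner_prefix "/erp5/web_site_module/site",
    /VirtualHostBase/https/example.com:443/VirtualHostRoot/web/login_form
    is rewritten to
    /VirtualHostBase/https/example.com:443/erp5/web_site_module/site/VirtualHostRoot/_vh_web/login_form
-#}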
{% endfor %}
{% set has_webdav = [] -%}
{% for address, connection_count, webdav in backend_list -%}
{% if webdav %}{% do has_webdav.append(None) %}{% endif -%}
{% set server_name = name ~ '-' ~ loop.index0 %}
server {{ server_name }} {{ address }} cookie {{ server_name }} check inter 3s rise 1 fall 2 maxqueue 5 maxconn {{ connection_count }}
{%- endfor -%}
{%- if not has_webdav and server_check_path %}
option httpchk GET {{ server_check_path }}
{%- endif %}
{% endfor %}
{% for (ip, port), (_, backend_dict) in sorted(six.iteritems(parameter_dict['zope-virtualhost-monster-backend-dict'])) -%}
{% set group_name = 'testrunner_' ~ loop.index0 %}
frontend frontend_{{ group_name }}
bind {{ ip }}:{{ port }} {{ bind_ssl_crt }}
timeout client 8h
# logs
capture request header Referer len 512
capture request header User-Agent len 512
log-format "%{+Q}o %{-Q}ci - - [%trg] %r %ST %B %{+Q}[capture.req.hdr(0)] %{+Q}[capture.req.hdr(1)] %Tt"
{% for name in sorted(backend_dict.keys()) %}
use_backend backend_{{ group_name }}_{{ name }} if { path -m beg /{{ name }} }
{%- endfor %}
{% for name, url in sorted(backend_dict.items()) %}
backend backend_{{ group_name }}_{{ name }}
http-request replace-path ^/{{ name }}(.*) /VirtualHostBase/https/{{ ip }}:{{ port }}/VirtualHostRoot/_vh_{{ name }}\1
timeout server 8h
server {{ name }} {{ urllib_parse.urlparse(url).netloc }}
{%- endfor %}
{% endfor %}
{% import "caucase" as caucase with context %}
{% set part_list = [] -%}
{% macro section(name) %}{% do part_list.append(name) %}{{ name }}{% endmacro -%}
{% set ssl_parameter_dict = slapparameter_dict['ssl'] -%}
{% set frontend_caucase_url_list = ssl_parameter_dict.get('frontend-caucase-url-list', []) -%}
{#
XXX: This template only supports exactly one IPv4 and (if ipv6 is used) one IPv6
per partition. No more (undefined result), no less (IndexError).
-#}
{% set ipv4 = (ipv4_set | list)[0] -%}
{% if ipv6_set -%}
{% set ipv6 = (ipv6_set | list)[0] -%}
{% endif -%}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
{{ caucase.updater(
prefix='caucase-updater',
buildout_bin_directory=parameter_dict['bin-directory'],
updater_path='${directory:services-on-watch}/caucase-updater',
url=ssl_parameter_dict['caucase-url'],
data_dir='${directory:srv}/caucase-updater',
crt_path='${apache-conf-ssl:caucase-cert}',
ca_path='${directory:srv}/caucase-updater/ca.crt',
crl_path='${directory:srv}/caucase-updater/crl.pem',
key_path='${apache-conf-ssl:caucase-key}',
on_renew='${haproxy-reload:output}',
max_sleep=ssl_parameter_dict.get('max-crl-update-delay', 1.0),
template_csr_pem=ssl_parameter_dict.get('csr'),
openssl=parameter_dict['openssl'] ~ '/bin/openssl',
)}}
{# XXX we don't use caucase yet.
{% do section('caucase-updater') -%}
{% do section('caucase-updater-promise') -%}
#}
{% set frontend_caucase_url_hash_list = [] -%}
{% for frontend_caucase_url in frontend_caucase_url_list -%}
{% set hash = hashlib.md5(six.ensure_binary(frontend_caucase_url)).hexdigest() -%}
{% do frontend_caucase_url_hash_list.append(hash) -%}
{% set data_dir = '${directory:client-cert-ca}/%s' % hash -%}
{{ caucase.updater(
prefix='caucase-updater-%s' % hash,
buildout_bin_directory=parameter_dict['bin-directory'],
updater_path='${directory:services-on-watch}/caucase-updater-%s' % hash,
url=frontend_caucase_url,
data_dir=data_dir,
ca_path='%s/ca.crt' % data_dir,
crl_path='%s/crl.pem' % data_dir,
on_renew='${caucase-updater-housekeeper:output}',
max_sleep=ssl_parameter_dict.get('max-crl-update-delay', 1.0),
openssl=parameter_dict['openssl'] ~ '/bin/openssl',
)}}
{% do section('caucase-updater-%s' % hash) -%}
{% endfor -%}
{% if frontend_caucase_url_hash_list -%}
[caucase-updater-housekeeper]
recipe = collective.recipe.template
output = ${directory:bin}/caucase-updater-housekeeper
mode = 700
input =
inline:
#!${buildout:executable}
import glob
import os
import subprocess
hash_list = {{ repr(frontend_caucase_url_hash_list) }}
crt_list = ['%s.crt' % e for e in hash_list]
for path in glob.glob('${haproxy-conf-ssl:ca-cert-dir}/*.crt'):
if os.path.basename(path) not in crt_list:
os.unlink(path)
crl_list = ['%s.crl' % e for e in hash_list]
for path in glob.glob('${haproxy-conf-ssl:crl-dir}/*.crl'):
if os.path.basename(path) not in crl_list:
os.unlink(path)
for hash in hash_list:
crt = '${directory:client-cert-ca}/%s/ca.crt' % hash
crt_link = '${haproxy-conf-ssl:ca-cert-dir}/%s.crt' % hash
crl = '${directory:client-cert-ca}/%s/crl.pem' % hash
crl_link = '${haproxy-conf-ssl:crl-dir}/%s.crl' % hash
if os.path.isfile(crt) and not os.path.islink(crt_link):
os.symlink(crt, crt_link)
if os.path.isfile(crl) and not os.path.islink(crl_link):
os.symlink(crl, crl_link)
subprocess.check_call(['{{ parameter_dict["openssl"] }}/bin/c_rehash', '${haproxy-conf-ssl:ca-cert-dir}'])
subprocess.check_call(['{{ parameter_dict["openssl"] }}/bin/c_rehash', '${haproxy-conf-ssl:crl-dir}'])
# assemble all CA and all CRLs in one file for haproxy
with open('${haproxy-conf-ssl:ca-cert}.tmp', 'w') as f:
for path in glob.glob('${haproxy-conf-ssl:ca-cert-dir}/*.crt'):
with open(path) as in_f:
f.write('#{}\n'.format(path))
f.write(in_f.read() + '\n')
with open('${haproxy-conf-ssl:crl}.tmp', 'w') as f:
for path in glob.glob('${haproxy-conf-ssl:crl-dir}/*.crl'):
with open(path) as in_f:
f.write('#{}\n'.format(path))
f.write(in_f.read() + '\n')
if os.path.exists('${haproxy-conf-ssl:ca-cert}'):
os.unlink('${haproxy-conf-ssl:ca-cert}')
if os.path.exists('${haproxy-conf-ssl:crl}'):
os.unlink('${haproxy-conf-ssl:crl}')
os.rename('${haproxy-conf-ssl:ca-cert}.tmp', '${haproxy-conf-ssl:ca-cert}')
os.rename('${haproxy-conf-ssl:crl}.tmp', '${haproxy-conf-ssl:crl}')
subprocess.check_call(['${haproxy-reload:output}'])
[caucase-updater-housekeeper-run]
recipe = plone.recipe.command
command = ${caucase-updater-housekeeper:output}
update-command = ${:command}
{% endif -%}
{% set haproxy_dict = {} -%}
{% set zope_virtualhost_monster_backend_dict = {} %}
{% set test_runner_url_dict = {} %} {# family_name => list of URLs #}
{% set next_port = functools.partial(next, itertools.count(slapparameter_dict['tcpv4-port'])) -%}
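{# next_port() returns successive integers starting at slapparameter_dict['tcpv4-port']. -#}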
{% for family_name, parameter_id_list in sorted(
six.iteritems(slapparameter_dict['zope-family-dict'])) -%}
{% set zope_family_address_list = [] -%}
{% set ssl_authentication = slapparameter_dict['ssl-authentication-dict'][family_name] -%}
{% set has_webdav = [] -%}
{% for parameter_id in parameter_id_list -%}
{% set zope_address_list = slapparameter_dict[parameter_id] -%}
{% for zope_address, maxconn, webdav in zope_address_list -%}
{% if webdav -%}
{% do has_webdav.append(None) %}
{% endif -%}
{% set zope_effective_address = zope_address -%}
{% do zope_family_address_list.append((zope_effective_address, maxconn, webdav)) -%}
{% endfor -%}
{# # Generate entries with rewrite rules for test runners #}
{% set test_runner_address_list = slapparameter_dict.get(parameter_id ~ '-test-runner-address-list', []) %}
{% if test_runner_address_list -%}
{% set test_runner_backend_mapping = {} %}
{% set test_runner_balancer_url_list = [] %}
{% set test_runner_external_port = next_port() %}
{% for i, (test_runner_internal_ip, test_runner_internal_port) in enumerate(test_runner_address_list) %}
{% do test_runner_backend_mapping.__setitem__(
'unit_test_' ~ i,
'http://' ~ test_runner_internal_ip ~ ':' ~ test_runner_internal_port ) %}
{% do test_runner_balancer_url_list.append(
'https://' ~ ipv4 ~ ':' ~ test_runner_external_port ~ '/unit_test_' ~ i ~ '/' ) %}
{% endfor %}
{% do zope_virtualhost_monster_backend_dict.__setitem__(
(ipv4, test_runner_external_port),
( ssl_authentication, test_runner_backend_mapping ) ) -%}
{% do test_runner_url_dict.__setitem__(family_name, test_runner_balancer_url_list) -%}
{% endif -%}
{% endfor -%}
{# Make rendering fail artificially if any family has no known backend.
# This is useful as haproxy's hot-reconfiguration mechanism is
# supervisord-incompatible.
# As jinja2 postpones KeyError until the placeholder value is actually used,
# do a no-op getitem.
-#}
{% do zope_family_address_list[0][0] -%}
{#
# We used to have haproxy and then apache; now haproxy plays apache's role.
# To keep ports stable, we consume one port so that haproxy uses the same
# port that apache was using before.
-#}
{% set _ = next_port() -%}
{% set haproxy_port = next_port() -%}
{% set backend_path = slapparameter_dict['backend-path-dict'][family_name] -%}
{% if has_webdav -%}
{% set external_scheme = 'webdavs' -%}
{% else %}
{% set external_scheme = 'https' -%}
{% endif -%}
{% do haproxy_dict.__setitem__(family_name, (haproxy_port, external_scheme, slapparameter_dict['ssl-authentication-dict'][family_name], slapparameter_dict['timeout-dict'][family_name], zope_family_address_list)) -%}
{% endfor -%}
[haproxy-cfg-parameter-dict]
ipv4 = {{ ipv4 }}
ipv6 = {{ ipv6 }}
cert = ${haproxy-conf-ssl:certificate}
{% if frontend_caucase_url_list -%}
ca-cert = ${haproxy-conf-ssl:ca-cert}
crl = ${haproxy-conf-ssl:crl}
{% endif %}
stats-socket = ${directory:run}/ha.sock
path-routing-list = {{ dumps(slapparameter_dict['path-routing-list']) }}
family-path-routing-dict = {{ dumps(slapparameter_dict['family-path-routing-dict']) }}
pidfile = ${directory:run}/haproxy.pid
log-socket = ${rsyslogd-cfg-parameter-dict:log-socket}
server-check-path = {{ dumps(slapparameter_dict['haproxy-server-check-path']) }}
backend-dict = {{ dumps(haproxy_dict) }}
zope-virtualhost-monster-backend-dict = {{ dumps(zope_virtualhost_monster_backend_dict) }}
[haproxy-cfg]
< = jinja2-template-base
url = {{ parameter_dict['template-haproxy-cfg'] }}
output = ${directory:etc}/haproxy.cfg
context =
section parameter_dict haproxy-cfg-parameter-dict
import urllib_parse six.moves.urllib.parse
extensions = jinja2.ext.do
[haproxy-reload]
recipe = collective.recipe.template
output = ${directory:bin}/${:_buildout_section_name_}
mode = 700
input =
inline:
#!/bin/sh
kill -USR2 $(cat "${haproxy-cfg-parameter-dict:pidfile}")
[{{ section('haproxy') }}]
recipe = slapos.cookbook:wrapper
wrapper-path = ${directory:services-on-watch}/haproxy
command-line = "{{ parameter_dict['haproxy'] }}/sbin/haproxy" -f "${haproxy-cfg:output}"
hash-files = ${haproxy-cfg:output}
[apache-conf-ssl]
# XXX caucase is/was buggy and this certificate does not match the key for instances
# that were updated, so don't use it yet.
caucase-cert = ${directory:apache-conf}/apache-caucase.crt
caucase-key = ${directory:apache-conf}/apache-caucase.pem
[simplefile]
< = jinja2-template-base
inline = {{ '{{ content }}' }}
{% macro simplefile(section_name, file_path, content, mode='') -%}
{% set content_section_name = section_name ~ '-content' -%}
[{{ content_section_name }}]
content = {{ dumps(content) }}
[{{ section(section_name) }}]
< = simplefile
output = {{ file_path }}
context = key content {{content_section_name}}:content
mode = {{ mode }}
{%- endmacro %}
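{# Usage sketch (hypothetical names): {{ simplefile('my-file', '${directory:etc}/my-file.txt', 'hello') }}
   renders a [my-file] section that writes 'hello' to that path. -#}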
[{{ section('haproxy-socat-stats') }}]
recipe = slapos.cookbook:wrapper
wrapper-path = ${directory:bin}/${:_buildout_section_name_}
command-line = "{{ parameter_dict['socat'] }}/bin/socat" unix-connect:${haproxy-cfg-parameter-dict:stats-socket} stdio
[rsyslogd-cfg-parameter-dict]
log-socket = ${directory:run}/log.sock
access-log-file = ${directory:log}/apache-access.log
error-log-file = ${directory:log}/apache-error.log
pid-file = ${directory:run}/rsyslogd.pid
spool-directory = ${directory:rsyslogd-spool}
[rsyslogd-cfg]
<= jinja2-template-base
url = {{ parameter_dict['template-rsyslogd-cfg'] }}
output = ${directory:etc}/rsyslogd.conf
context = section parameter_dict rsyslogd-cfg-parameter-dict
[{{ section('rsyslogd') }}]
recipe = slapos.cookbook:wrapper
command-line = {{ parameter_dict['rsyslogd'] }}/sbin/rsyslogd -i ${rsyslogd-cfg-parameter-dict:pid-file} -n -f ${rsyslogd-cfg:output}
wrapper-path = ${directory:services-on-watch}/rsyslogd
hash-existing-files = ${buildout:directory}/software_release/buildout.cfg
hash-files = ${rsyslogd-cfg:output}
[{{ section('rsyslogd-listen-promise') }}]
<= monitor-promise-base
promise = check_command_execute
name = rsyslogd_listen_promise.py
config-command = test -S ${rsyslogd-cfg-parameter-dict:log-socket}
[haproxy-conf-ssl]
certificate = ${build-certificate-and-key:certificate-and-key}
{% if frontend_caucase_url_list -%}
ca-cert = ${directory:etc}/frontend-ca.pem
ca-cert-dir = ${directory:ca-cert}
crl = ${directory:etc}/frontend-crl.pem
crl-dir = ${directory:crl}
depends = ${caucase-updater-housekeeper-run:recipe}
{%- endif %}
[build-certificate-and-key]
{% if ssl_parameter_dict.get('key') -%}
certificate-and-key = ${tls-certificate-and-key-from-parameters:output}
{{ simplefile(
'tls-certificate-and-key-from-parameters',
'${directory:etc}/certificate-and-key-from-parameters.pem',
ssl_parameter_dict['cert'] ~ "\n" ~ ssl_parameter_dict['key']) }}
{% else %}
recipe = plone.recipe.command
command = "{{ parameter_dict['openssl'] }}/bin/openssl" req -newkey rsa -batch -new -x509 -days 3650 -nodes -keyout "${:certificate-and-key}" -out "${:certificate-and-key}"
certificate-and-key = ${directory:etc}/certificate-and-key-generated.pem
{%- endif %}
[{{ section('haproxy-promise') }}]
<= monitor-promise-base
# Check any haproxy port in ipv4, expect other ports and ipv6 to behave consistently
promise = check_socket_listening
name = haproxy.py
config-host = {{ ipv4 }}
config-port = {{ next(six.itervalues(haproxy_dict))[0] }}
[{{ section('publish') }}]
recipe = slapos.cookbook:publish.serialised
{% for family_name, (port, scheme, _, _, _) in haproxy_dict.items() -%}
{{ family_name ~ '-v6' }} = {% if ipv6_set %}{{ scheme ~ '://[' ~ ipv6 ~ ']:' ~ port }}{% endif %}
{{ family_name }} = {{ scheme ~ '://' ~ ipv4 ~ ':' ~ port }}
{% endfor -%}
{% for family_name, test_runner_url_list in test_runner_url_dict.items() -%}
{{ family_name ~ '-test-runner-url-list' }} = {{ dumps(test_runner_url_list) }}
{% endfor -%}
monitor-base-url = ${monitor-publish-parameters:monitor-base-url}
[{{ section('logrotate-rsyslogd') }}]
< = logrotate-entry-base
name = rsyslogd
log = ${rsyslogd-cfg-parameter-dict:access-log-file} ${rsyslogd-cfg-parameter-dict:error-log-file}
post = test ! -s ${rsyslogd-cfg-parameter-dict:pid-file} || kill -HUP $(cat ${rsyslogd-cfg-parameter-dict:pid-file})
[directory]
recipe = slapos.cookbook:mkdirectory
bin = ${buildout:directory}/bin
etc = ${buildout:directory}/etc
services = ${:etc}/run
services-on-watch = ${:etc}/service
var = ${buildout:directory}/var
run = ${:var}/run
log = ${:var}/log
srv = ${buildout:directory}/srv
apachedex = ${monitor-directory:private}/apachedex
rsyslogd-spool = ${:run}/rsyslogd-spool
{% if frontend_caucase_url_list -%}
ca-cert = ${:etc}/ssl.crt
crl = ${:etc}/ssl.crl
client-cert-ca = ${:srv}/client-cert-ca
{% endif -%}
[{{ section('resiliency-exclude-file') }}]
# Generate rdiff exclude file in case of resiliency
< = jinja2-template-base
inline = {{ '{{ "${directory:log}/**\\n" }}' }}
output = ${directory:srv}/exporter.exclude
[{{ section('monitor-generate-apachedex-report') }}]
recipe = slapos.cookbook:cron.d
cron-entries = ${cron:cron-entries}
name = generate-apachedex-report
# The goal is to be executed before logrotate's log rotation.
# Here, logrotate-entry-base:frequency = daily, so we run at 23:00 every day.
frequency = 0 23 * * *
command = ${monitor-generate-apachedex-report-wrapper:wrapper-path}
[monitor-generate-apachedex-report-wrapper]
recipe = slapos.cookbook:wrapper
wrapper-path = ${directory:bin}/${:command}
command-line = "{{ parameter_dict['run-apachedex-location'] }}" "{{ parameter_dict['apachedex-location'] }}" "${directory:apachedex}" ${monitor-publish-parameters:monitor-base-url}/private/apachedex --apache-log-list "${apachedex-parameters:apache-log-list}" --configuration ${apachedex-parameters:configuration}
command = generate-apachedex-report
[monitor-apachedex-report-config]
recipe = slapos.recipe.template
output = ${directory:etc}/${:_buildout_section_name_}
inline =
{% for line in slapparameter_dict['apachedex-configuration'] %}
{# apachedex config files use shlex.split, so we need to quote the arguments. #}
{# BBB: in python 3 we can use shlex.quote instead. #}
{{ repr(line.encode('utf-8')) }}
{% endfor %}
[apachedex-parameters]
apache-log-list = ${rsyslogd-cfg-parameter-dict:access-log-file}
configuration = ${monitor-apachedex-report-config:output}
promise-threshold = {{ slapparameter_dict['apachedex-promise-threshold'] }}
[{{ section('monitor-promise-apachedex-result') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-apachedex-result.py
config-command = "{{ parameter_dict['promise-check-apachedex-result'] }}" --apachedex_path "${directory:apachedex}" --status_file ${monitor-directory:private}/apachedex.report.json --threshold "${apachedex-parameters:promise-threshold}"
[{{ section('promise-check-computer-memory') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-computer-memory.py
config-command = "{{ parameter_dict["check-computer-memory-binary"] }}" -db ${monitor-instance-parameter:collector-db} --threshold "{{ slapparameter_dict["computer-memory-percent-threshold"] }}" --unit percent
[monitor-instance-parameter]
monitor-httpd-ipv6 = {{ (ipv6_set | list)[0] }}
monitor-httpd-port = {{ next_port() }}
monitor-title = {{ slapparameter_dict['name'] }}
password = {{ slapparameter_dict['monitor-passwd'] }}
[buildout]
extends =
{{ template_monitor }}
parts +=
{{ part_list | join('\n ') }}
{% import "root_common" as root_common with context -%}
{% import "caucase" as caucase with context %}
{% set frontend_dict = slapparameter_dict.get('frontend', {}) -%}
{% set has_frontend = frontend_dict.get('software-url', '') != '' -%}
{% set site_id = slapparameter_dict.get('site-id', 'erp5') -%}
{% set inituser_login = slapparameter_dict.get('inituser-login', 'zope') -%}
{% set publish_dict = {'site-id': site_id, 'inituser-login': inituser_login} -%}
{% set has_postfix = slapparameter_dict.get('smtp', {}).get('postmaster') -%}
{% set jupyter_dict = slapparameter_dict.get('jupyter', {}) -%}
{% set has_jupyter = jupyter_dict.get('enable', jupyter_enable_default.lower() in ('true', 'yes')) -%}
{% set jupyter_zope_family = jupyter_dict.get('zope-family', '') -%}
{% set wcfs_dict = slapparameter_dict.get('wcfs', {}) -%}
{% set wcfs_enable = wcfs_dict.get('enable', wcfs_enable_default.lower() in ('true', 'yes')) -%}
{% set test_runner_enabled = slapparameter_dict.get('test-runner', {}).get('enabled', True) -%}
{% set test_runner_node_count = slapparameter_dict.get('test-runner', {}).get('node-count', 3) -%}
{% set test_runner_extra_database_count = slapparameter_dict.get('test-runner', {}).get('extra-database-count', 3) -%}
{% set test_runner_total_database_count = test_runner_node_count * (1 + test_runner_extra_database_count) -%}
{# Backward compatibility for mariadb.test-database-amount #}
{% set mariadb_test_database_amount = slapparameter_dict.get('mariadb', {}).get('test-database-amount') -%}
{% if mariadb_test_database_amount is not none -%}
{% set test_runner_total_database_count = mariadb_test_database_amount %}
{% set test_runner_enabled = mariadb_test_database_amount > 0 %}
{% endif -%}
{# Backward compatibility for cloudooo-url #}
{% if slapparameter_dict.get('cloudooo-url') -%}
{% do slapparameter_dict.setdefault('cloudooo-url-list', slapparameter_dict['cloudooo-url'].split(',')) %}
{% endif -%}
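{# e.g. (hypothetical URLs) cloudooo-url = "https://cloudooo1.example,https://cloudooo2.example"
   yields cloudooo-url-list = ['https://cloudooo1.example', 'https://cloudooo2.example']. -#}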
{% set test_runner_random_activity_priority = slapparameter_dict.get('test-runner', {}).get('random-activity-priority') -%}
{% set monitor_base_url_dict = {} -%}
{% set monitor_dict = slapparameter_dict.get('monitor', {}) %}
{% set use_ipv6 = slapparameter_dict.get('use-ipv6', False) -%}
{% set partition_thread_count_list = [] -%}
{% set zope_partition_dict = slapparameter_dict.get('zope-partition-dict', {'1': {}}) -%}
{% set zope_family_override_dict = slapparameter_dict.get('family-override', {}) -%}
{% for zope_parameter_dict in six.itervalues(zope_partition_dict) -%}
{# Apply some zope_parameter_dict default values, to avoid duplication. -#}
{% do zope_parameter_dict.setdefault('thread-amount', 4) -%}
{% do zope_parameter_dict.setdefault('instance-count', 1) -%}
{% do partition_thread_count_list.append(zope_parameter_dict['thread-amount'] * zope_parameter_dict['instance-count']) -%}
{% endfor -%}
[request-common]
<= request-common-base
config-use-ipv6 = {{ dumps(slapparameter_dict.get('use-ipv6', False)) }}
config-computer-memory-percent-threshold = {{ dumps(monitor_dict.get('computer-memory-percent-threshold', 80)) }}
{% set caucase_dict = slapparameter_dict.get('caucase', {}) -%}
{% set caucase_url = caucase_dict.get('url', '') -%}
{% macro request(name, software_type, config_key, config, ret={'url': True}, key_config={}) -%}
{% do config.update(slapparameter_dict.get(config_key, {})) -%}
{% set section = 'request-' ~ name -%}
[{{ section }}]
<= request-common
name = {{ name }}
software-type = {{ software_type }}
return = {{ ' '.join(ret) }}
{% for ret, publish in six.iteritems(ret) -%}
{% if publish -%}
{% do publish_dict.__setitem__(name ~ '-' ~ ret, '${' ~ section ~ ':connection-' ~ ret ~ '}') %}
{% endif -%}
{% if ret == "monitor-base-url" -%}
{% do monitor_base_url_dict.__setitem__(section, '${' ~ section ~ ':connection-' ~ ret ~ '}') -%}
{% endif -%}
{% endfor -%}
{{ root_common.sla(name) }}
{% for k, v in six.iteritems(config) -%}
config-{{ k }} = {{ dumps(v) }}
{% endfor -%}
{% for k, v in six.iteritems(key_config) -%}
config-{{ k }} = {{ '${' ~ v ~ '}' }}
{% endfor -%}
config-name = {{ name }}
{% endmacro -%}
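{# e.g. request('memcached-persistent', 'kumofs', 'kumofs', ...) further below
   renders a [request-memcached-persistent] section of software-type kumofs,
   publishes its connection-url, and merges user overrides taken from
   slapparameter_dict['kumofs']. -#}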
[directory]
recipe = slapos.cookbook:mkdirectory
bin = ${buildout:directory}/bin
service-on-watch = ${buildout:directory}/etc/service
srv = ${buildout:directory}/srv
tmp = ${buildout:directory}/tmp
backup-caucased = ${:srv}/backup/caucased
{% if not caucase_url -%}
{% if use_ipv6 -%}
{% set caucase_host = '[' ~ (ipv6_set | list)[0] ~ ']' %}
{%- else -%}
{% set caucase_host = (ipv4_set | list)[0] %}
{%- endif %}
{% set caucase_port = caucase_dict.get('base-port', 8890) -%}
{% set caucase_netloc = caucase_host ~ ':' ~ caucase_port -%}
{% set caucase_url = 'http://' ~ caucase_netloc -%}
{{ caucase.caucased(
prefix='caucased',
buildout_bin_directory=bin_directory,
caucased_path='${directory:service-on-watch}/caucased',
backup_dir='${directory:backup-caucased}',
data_dir='${directory:srv}/caucased',
netloc=caucase_netloc,
tmp='${directory:tmp}',
service_auto_approve_count=caucase_dict.get('service-auto-approve-amount', 1),
user_auto_approve_count=caucase_dict.get('user-auto-approve-amount', 0),
key_len=caucase_dict.get('key-length', 2048),
)}}
{% do root_common.section('caucased') -%}
{% do root_common.section('caucased-promise') -%}
{% endif -%}
{% do publish_dict.__setitem__('caucase-http-url', caucase_url) -%}
{% set balancer_dict = slapparameter_dict.setdefault('balancer', {}) -%}
{% do balancer_dict.setdefault('ssl', {}).setdefault('caucase-url', caucase_url) -%}
{% do balancer_dict.setdefault('tcpv4-port', 2150) -%}
{% do balancer_dict.__setitem__('haproxy-server-check-path', balancer_dict.get('haproxy-server-check-path', '/') % {'site-id': site_id}) -%}
{% set routing_path_template_field_dict = {"site-id": site_id} -%}
{% macro expandRoutingPath(output, input) -%}
{% for outer_prefix, inner_prefix in input -%}
{% do output.append((outer_prefix, inner_prefix % routing_path_template_field_dict)) -%}
{% endfor -%}
{% endmacro -%}
{% set path_routing_list = [] -%}
{% do expandRoutingPath(
path_routing_list,
balancer_dict.get(
'path-routing-list',
(('/', '/'), ),
),
) -%}
{% do balancer_dict.__setitem__('path-routing-list', path_routing_list) -%}
{% set family_path_routing_dict = {} -%}
{% for name, family_path_routing_list in balancer_dict.get(
'family-path-routing-dict',
{},
).items() -%}
{% set path_routing_list = [] -%}
{% do expandRoutingPath(path_routing_list, family_path_routing_list) -%}
{% do family_path_routing_dict.__setitem__(name, path_routing_list) -%}
{% endfor -%}
{% do balancer_dict.__setitem__('family-path-routing-dict', family_path_routing_dict) -%}
{{ request('memcached-persistent', 'kumofs', 'kumofs', {'tcpv4-port': 2000}, {'url': True, 'monitor-base-url': False}, key_config={'monitor-passwd': 'monitor-htpasswd:passwd'}) }}
{{ request('memcached-volatile', 'kumofs', 'memcached', {'tcpv4-port': 2010, 'ram-storage-size': 64}, {'url': True, 'monitor-base-url': False}, key_config={'monitor-passwd': 'monitor-htpasswd:passwd'}) }}
{# Notes on max-connection-count: On a standard ERP5, each transaction
can have 4 connections to mariadb: activities, catalog, deferred and
transactionless. Count 5 to have some headroom. Multiply by the total
number of zope threads for all processes from all partitions to get the
expected number of connections. Add 50 to have some more zope-independent
headroom (automated probes, replication, ...).
-#}
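{# Worked example with hypothetical sizing: two zope partitions, each with
   instance-count=2 and thread-amount=4, give sum(partition_thread_count_list)
   = 16, hence max-connection-count = 16 * 5 + 50 = 130. -#}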
{{ request('mariadb', 'mariadb', 'mariadb',
{
'tcpv4-port': 2099,
'max-slowqueries-threshold': monitor_dict.get('max-slowqueries-threshold', 1000),
'slowest-query-threshold': monitor_dict.get('slowest-query-threshold', ''),
'test-database-amount': test_runner_total_database_count,
'max-connection-count': sum(partition_thread_count_list) * 5 + 50,
},
{
'database-list': True,
'test-database-list': True,
'monitor-base-url': False,
},
key_config={'monitor-passwd': 'monitor-htpasswd:passwd'},
) }}
{% if has_postfix -%}
{{ request('smtp', 'postfix', 'smtp', {'tcpv4-port': 2025, 'smtpd-sasl-user': 'erp5@nowhere'}, key_config={'smtpd-sasl-password': 'publish-early:smtpd-sasl-password', 'monitor-passwd': 'monitor-htpasswd:passwd'}) }}
{%- else %}
[request-smtp]
# Placeholder smtp service URL
connection-url = smtp://127.0.0.2:0/
{%- endif %}
{# ZODB -#}
{% set zodb_dict = {} -%}
{% set storage_dict = {} -%}
{% set mountpoints = set() -%}
{% for zodb in slapparameter_dict.get('zodb') or ({'type': 'zeo', 'server': {}},) -%}
{% do mountpoints.add(zodb.setdefault('mount-point', '/')) -%}
{% set name = zodb.pop('name', 'root') -%}
{% do assert(name not in zodb_dict, name, zodb_dict) -%}
{% do zodb_dict.__setitem__(name, zodb) -%}
{% if 'server' in zodb -%}
{% do storage_dict.setdefault(zodb['type'], {}).__setitem__(name, zodb.pop('server')) -%}
{% endif -%}
{% endfor -%}
{% do assert(len(mountpoints) == len(zodb_dict)) -%}
{% set neo = [] -%}
{% for server_type, server_dict in six.iteritems(storage_dict) -%}
{% if server_type == 'neo' -%}
{% set ((name, server_dict),) = server_dict.items() -%}
{% do neo.append(server_dict.get('cluster')) -%}
{% do server_dict.update(cluster='${publish-early:neo-cluster}') -%}
{{ root_common.request_neo(server_dict, 'zodb-neo', 'neo-', monitor_base_url_dict) }}
{% set client_dict = zodb_dict[name].setdefault('storage-dict', {}) -%}
{% for k in 'ssl', '_ca', '_cert', '_key' -%}
{% do k in server_dict and client_dict.setdefault(k, server_dict[k]) -%}
{% endfor -%}
{% else -%}
{{ assert(server_type == 'zeo', server_type) -}}
{# BBB: for compatibility, keep 'zodb' as partition_reference for ZEO -#}
{{ request('zodb', 'zodb-' ~ server_type, 'zodb-' ~ server_type, {'tcpv4-port': 2100, 'zodb-dict': server_dict}, dict.fromkeys(('storage-dict', 'tidstorage-ip', 'tidstorage-port', 'monitor-base-url')), key_config={'monitor-passwd': 'monitor-htpasswd:passwd'}) }}
{% endif -%}
{% endfor -%}
[request-zodb-base]
config-zodb-dict = {{ dumps(zodb_dict) }}
{% for server_type, server_dict in six.iteritems(storage_dict) -%}
{% if server_type == 'neo' -%}
config-neo-cluster = ${publish-early:neo-cluster}
config-neo-name = {{ next(iter(server_dict)) }}
config-neo-masters = ${publish-early:neo-masters}
{% else -%}
config-zodb-zeo = ${request-zodb:connection-storage-dict}
config-tidstorage-ip = ${request-zodb:connection-tidstorage-ip}
config-tidstorage-port = ${request-zodb:connection-tidstorage-port}
{% endif -%}
{% endfor -%}
{% set zope_address_list_id_dict = {} -%}
{% if zope_partition_dict -%}
[request-zope-base]
<= request-common
request-zodb-base
return =
zope-address-list
hosts-dict
monitor-base-url
software-release-url
{%- if test_runner_enabled %}
test-runner-address-list
{% endif %}
{% set bt5_default_list = [
'erp5_full_text_mroonga_catalog',
'erp5_configurator_standard',
'erp5_configurator_maxma_demo',
'erp5_configurator_run_my_doc',
] -%}
{% if has_jupyter -%}
{% do bt5_default_list.append('erp5_data_notebook') -%}
{% endif -%}
config-bt5 = {{ dumps(slapparameter_dict.get('bt5', ' '.join(bt5_default_list))) }}
config-bt5-repository-url = {{ dumps(slapparameter_dict.get('bt5-repository-url', local_bt5_repository)) }}
config-cloudooo-url-list = {{ dumps(slapparameter_dict.get('cloudooo-url-list', default_cloudooo_url_list)) }}
config-caucase-url = {{ dumps(caucase_url) }}
config-deadlock-debugger-password = ${publish-early:deadlock-debugger-password}
config-developer-list = {{ dumps(slapparameter_dict.get('developer-list', [inituser_login])) }}
config-selenium-server-configuration-dict = {{ dumps(slapparameter_dict.get('selenium-server-configuration-dict', {})) }}
config-hosts-dict = {{ dumps(slapparameter_dict.get('hosts-dict', {})) }}
config-computer-hosts-dict = {{ dumps(slapparameter_dict.get('computer-hosts-dict', {})) }}
config-hostalias-dict = {{ dumps(slapparameter_dict.get('hostalias-dict', {})) }}
config-id-store-interval = {{ dumps(slapparameter_dict.get('id-store-interval')) }}
config-zope-longrequest-logger-error-threshold = {{ dumps(monitor_dict.get('zope-longrequest-logger-error-threshold', 20)) }}
config-zope-longrequest-logger-maximum-delay = {{ dumps(monitor_dict.get('zope-longrequest-logger-maximum-delay', 0)) }}
config-inituser-login = {{ dumps(inituser_login) }}
config-inituser-password = ${publish-early:inituser-password}
config-kumofs-url = ${request-memcached-persistent:connection-url}
config-memcached-url = ${request-memcached-volatile:connection-url}
config-monitor-passwd = ${monitor-htpasswd:passwd}
config-mysql-test-url-list = ${request-mariadb:connection-test-database-list}
config-mysql-url-list = ${request-mariadb:connection-database-list}
config-site-id = {{ dumps(site_id) }}
config-smtp-url = ${request-smtp:connection-url}
config-timezone = {{ dumps(slapparameter_dict.get('timezone', 'UTC')) }}
config-cloudooo-retry-count = {{ slapparameter_dict.get('cloudooo-retry-count', 2) }}
config-wendelin-core-zblk-fmt = {{ dumps(slapparameter_dict.get('wendelin-core-zblk-fmt', '')) }}
config-wsgi = {{ dumps(slapparameter_dict.get('wsgi', True)) }}
config-test-runner-enabled = {{ dumps(test_runner_enabled) }}
config-test-runner-node-count = {{ dumps(test_runner_node_count) }}
config-test-runner-random-activity-priority = {{ dumps(test_runner_random_activity_priority) }}
config-wcfs_enable = {{ dumps(wcfs_enable) }}
config-test-runner-configuration = {{ dumps(slapparameter_dict.get('test-runner', {})) }}
software-type = zope
{% set global_publisher_timeout = slapparameter_dict.get('publisher-timeout', 300) -%}
{% set global_activity_timeout = slapparameter_dict.get('activity-timeout') -%}
{% set zope_family_dict = {} -%}
{% set zope_family_name_list = [] -%}
{% set zope_backend_path_dict = {} -%}
{% set ssl_authentication_dict = {} -%}
{% set balancer_timeout_dict = {} -%}
{% set jupyter_zope_family_default = [] -%}
{% for custom_name, zope_parameter_dict in six.iteritems(zope_partition_dict) -%}
{% set partition_name = 'zope-' ~ custom_name -%}
{% set section_name = 'request-' ~ partition_name -%}
{% set promise_software_url_section_name = 'promise-software-url' ~ partition_name -%}
{% set zope_family = zope_parameter_dict.get('family', 'default') -%}
{% do zope_family_name_list.append(zope_family) %}
{% set backend_path = zope_parameter_dict.get('backend-path', '') % {'site-id': site_id} %}
{# # default jupyter zope family is first zope family. -#}
{# # use list.append() to update it, because in jinja2 set changes only local scope. -#}
{% if not jupyter_zope_family_default -%}
{% do jupyter_zope_family_default.append(zope_family) -%}
{% endif -%}
{% do zope_family_dict.setdefault(zope_family, []).append(section_name) -%}
{% do zope_backend_path_dict.__setitem__(zope_family, backend_path) -%}
{% do ssl_authentication_dict.__setitem__(zope_family, zope_parameter_dict.get('ssl-authentication', False)) -%}
{% set current_zope_family_override_dict = zope_family_override_dict.get(zope_family, {}) -%}
{% do balancer_timeout_dict.__setitem__(zope_family, current_zope_family_override_dict.get('publisher-timeout', global_publisher_timeout)) -%}
[{{ section_name }}]
<= request-zope-base
name = {{ partition_name }}
{% do monitor_base_url_dict.__setitem__(section_name, '${' ~ section_name ~ ':connection-monitor-base-url}') -%}
{{ root_common.sla(partition_name) }}
config-name = {{ dumps(custom_name) }}
config-instance-count = {{ dumps(zope_parameter_dict['instance-count']) }}
config-private-dev-shm = {{ zope_parameter_dict.get('private-dev-shm', '') }}
config-thread-amount = {{ dumps(zope_parameter_dict['thread-amount']) }}
config-timerserver-interval = {{ dumps(zope_parameter_dict.get('timerserver-interval', 1)) }}
config-longrequest-logger-interval = {{ dumps(zope_parameter_dict.get('longrequest-logger-interval', -1)) }}
config-longrequest-logger-timeout = {{ dumps(zope_parameter_dict.get('longrequest-logger-timeout', 1)) }}
config-large-file-threshold = {{ dumps(zope_parameter_dict.get('large-file-threshold', "10MB")) }}
config-port-base = {{ dumps(zope_parameter_dict.get('port-base', 2200)) }}
{# BBB: zope_parameter_dict used to contain 'webdav', so fallback to it -#}
config-webdav = {{ dumps(current_zope_family_override_dict.get('webdav', zope_parameter_dict.get('webdav', False))) }}
config-publisher-timeout = {{ dumps(current_zope_family_override_dict.get('publisher-timeout', global_publisher_timeout)) }}
config-activity-timeout = {{ dumps(current_zope_family_override_dict.get('activity-timeout', global_activity_timeout)) }}
{% if test_runner_enabled -%}
config-test-runner-apache-url-list = ${publish-early:{{ zope_family }}-test-runner-url-list}
[{{ promise_software_url_section_name }}]
# Promise to wait for zope partition to use the expected software URL,
# used on upgrades.
recipe = slapos.cookbook:check_parameter
value = {{ '${' ~ section_name ~ ':connection-software-release-url}' }}
expected-not-value =
expected-value = ${slap-connection:software-release-url}
path = ${directory:bin}/${:_buildout_section_name_}
{% do root_common.section(promise_software_url_section_name) -%}
{% endif -%}
{% endfor -%}
{# if not explicitly configured, connect jupyter to first zope family, which -#}
{# will be 'default' if zope families are not configured also -#}
{% if not jupyter_zope_family and jupyter_zope_family_default -%}
{% set jupyter_zope_family = jupyter_zope_family_default[0] -%}
{% endif -%}
{# We need to concatenate lists that we cannot read as lists, so this gets hairy. -#}
{% set zope_family_parameter_dict = {} -%}
{% for family_name, zope_section_id_list in zope_family_dict.items() -%}
{% for zope_section_id in zope_section_id_list -%}
{% set parameter_name = 'zope-family-entry-' ~ zope_section_id -%}
{% do zope_address_list_id_dict.__setitem__(zope_section_id, parameter_name) -%}
{% do zope_family_parameter_dict.setdefault(family_name, []).append(parameter_name) -%}
{% endfor -%}
{% if has_frontend -%}
{% set frontend_name = 'frontend-' ~ family_name -%}
{% do publish_dict.__setitem__('family-' ~ family_name, '${' ~ frontend_name ~ ':connection-site_url}' ) -%}
[{{ frontend_name }}]
<= request-frontend-base
name = {{ frontend_name }}
config-url = ${request-balancer:connection-{{ family_name }}-v6}
{% else -%}
{% do publish_dict.__setitem__('family-' ~ family_name, '${request-balancer:connection-' ~ family_name ~ '}' ) -%}
{% do publish_dict.__setitem__('family-' ~ family_name ~ '-v6', '${request-balancer:connection-' ~ family_name ~ '-v6}' ) -%}
{% endif -%}
{% endfor -%}
{% if has_jupyter -%}
{# request jupyter connected to balancer of proper zope family -#}
{{ request('jupyter', 'jupyter', 'jupyter', {}, key_config={'erp5-url': 'request-balancer:connection-' ~ jupyter_zope_family}) }}
{% if has_frontend -%}
[frontend-jupyter]
<= request-frontend-base
name = frontend-jupyter
config-url = ${request-jupyter:connection-url}
{# # override jupyter-url in publish_dict with frontend address -#}
{% do publish_dict.__setitem__('jupyter-url', '${frontend-jupyter:connection-site_url}') -%}
{% endif -%}
{%- endif %}
{% if wcfs_enable -%}
{# request WCFS connected to ZODB -#}
{% do root_common.section('request-wcfs') -%}
{{ request('wcfs', 'wcfs', 'wcfs', {}, {}) }}
[request-wcfs]
<= request-common
request-zodb-base
{%- endif %}
{% set balancer_ret_dict = {'monitor-base-url': False} -%}
{% for family in zope_family_dict -%}
{% do balancer_ret_dict.__setitem__(family, False) -%}
{% do balancer_ret_dict.__setitem__(family + '-v6', False) -%}
{% if test_runner_enabled -%}
{% do balancer_ret_dict.__setitem__(family + '-test-runner-url-list', False) -%}
{% endif -%}
{% endfor -%}
{% set balancer_key_config_dict = {
'monitor-passwd': 'monitor-htpasswd:passwd',
} -%}
{% for zope_section_id, name in zope_address_list_id_dict.items() -%}
{% do balancer_key_config_dict.__setitem__(
name,
zope_section_id + ':connection-zope-address-list',
) -%}
{% if test_runner_enabled -%}
{% do balancer_key_config_dict.__setitem__(
name + '-test-runner-address-list',
zope_section_id + ':connection-test-runner-address-list',
) -%}
{% endif -%}
{% endfor -%}
{{ request(
name='balancer',
software_type='balancer',
config_key='balancer',
config={
'zope-family-dict': zope_family_parameter_dict,
'backend-path-dict': zope_backend_path_dict,
'ssl-authentication-dict': ssl_authentication_dict,
'timeout-dict': balancer_timeout_dict,
'apachedex-promise-threshold': monitor_dict.get('apachedex-promise-threshold', 70),
'apachedex-configuration': monitor_dict.get(
'apachedex-configuration',
[
'--logformat', '%h %l %u %t "%r" %>s %O "%{Referer}i" "%{User-Agent}i" %{ms}T',
'--erp5-base', '+erp5', '.*/VirtualHostRoot/erp5(/|\\?|$)',
'--base', '+other', '/',
'--skip-user-agent', 'Zabbix',
'--error-detail',
'--js-embed',
'--quiet',
],
),
},
ret=balancer_ret_dict,
key_config=balancer_key_config_dict,
) }}
[request-frontend-base]
{% if has_frontend -%}
<= request-common
recipe = slapos.cookbook:request
software-url = {{ dumps(frontend_dict['software-url']) }}
software-type = {{ dumps(frontend_dict.get('software-type', 'RootSoftwareInstance')) }}
{{ root_common.sla('frontend', True) }}
shared = true
{% set config_dict = {
'type': 'zope',
} -%}
{% if frontend_dict.get('domain') -%}
{% do config_dict.__setitem__('custom_domain', frontend_dict['domain']) -%}
{% endif -%}
{% if frontend_dict.get('virtualhostroot-http-port') -%}
{% do config_dict.__setitem__('virtualhostroot-http-port', frontend_dict['virtualhostroot-http-port']) -%}
{% endif -%}
{% if frontend_dict.get('virtualhostroot-https-port') -%}
{% do config_dict.__setitem__('virtualhostroot-https-port', frontend_dict['virtualhostroot-https-port']) -%}
{% endif -%}
{% for name, value in config_dict.items() -%}
config-{{ name }} = {{ value }}
{% endfor -%}
return = site_url
{% endif -%}
{% endif -%}{# if zope_partition_dict -#}
[publish]
<= monitor-publish
recipe = slapos.cookbook:publish.serialised
-extends = publish-early
{% if zope_address_list_id_dict -%}
{#
Pick any published hosts-dict; they are expected to be identical, and there is
no way to check that here.
-#}
hosts-dict = {{ '${' ~ next(iter(zope_address_list_id_dict)) ~ ':connection-hosts-dict}' }}
{% endif -%}
{% for name, value in publish_dict.items() -%}
{{ name }} = {{ value }}
{% endfor -%}
{% if test_runner_enabled -%}
{% for zope_family_name in zope_family_name_list -%}
{{ zope_family_name }}-test-runner-url-list = ${request-balancer:connection-{{ zope_family_name }}-test-runner-url-list}
{% endfor -%}
{% endif -%}
[publish-early]
recipe = slapos.cookbook:publish-early
-init =
inituser-password gen-password:passwd
deadlock-debugger-password gen-deadlock-debugger-password:passwd
{%- if has_postfix %}
smtpd-sasl-password gen-smtpd-sasl-password:passwd
{%- endif %}
{%- if test_runner_enabled %}
{%- for zope_family_name in zope_family_name_list %}
{{ zope_family_name }}-test-runner-url-list default-balancer-test-runner-url-list:{{ zope_family_name }}
{%- endfor %}
{%- endif %}
{%- if neo %}
neo-cluster gen-neo-cluster:name
neo-admins neo-cluster:admins
neo-masters neo-cluster:masters
{%- if neo[0] %}
neo-cluster = {{ dumps(neo[0]) }}
{%- endif %}
{%- endif %}
{%- set inituser_password = slapparameter_dict.get('inituser-password') %}
{%- if inituser_password %}
inituser-password = {{ dumps(inituser_password) }}
{%- endif %}
{%- set deadlock_debugger_password = slapparameter_dict.get('deadlock-debugger-password') -%}
{%- if deadlock_debugger_password %}
deadlock-debugger-password = {{ dumps(deadlock_debugger_password) }}
{% endif %}
{%- if test_runner_enabled %}
[default-balancer-test-runner-url-list]
recipe =
{%- for zope_family_name in zope_family_name_list %}
{{ zope_family_name }} = not-ready
{%- endfor %}
{%- endif %}
[gen-password]
recipe = slapos.cookbook:generate.password
storage-path =
[gen-deadlock-debugger-password]
<= gen-password
[gen-neo-cluster-base]
<= gen-password
[gen-neo-cluster]
name = neo-${gen-neo-cluster-base:passwd}
[gen-smtpd-sasl-password]
< = gen-password
{{ root_common.common_section() }}
[monitor-conf-parameters]
monitor-title = ERP5
password = ${monitor-htpasswd:passwd}
[monitor-base-url-dict]
{% for key, value in monitor_base_url_dict.items() -%}
{{ key }} = {{ value }}
{% endfor %}
{% set use_ipv6 = slapparameter_dict.get('use-ipv6', False) -%}
[buildout]
extends =
{{ template_monitor }}
parts +=
publish
kumofs-instance
logrotate-entry-kumofs
resiliency-exclude-file
promise-kumofs-server
promise-kumofs-server-listen
promise-kumofs-gateway
promise-kumofs-manager
promise-check-computer-memory
[publish]
recipe = slapos.cookbook:publish.serialised
{% if use_ipv6 -%}
url = memcached://[${kumofs-instance:ip}]:${kumofs-instance:gateway-port}/
{% else -%}
url = memcached://${kumofs-instance:ip}:${kumofs-instance:gateway-port}/
{% endif -%}
monitor-base-url = ${monitor-publish-parameters:monitor-base-url}
[kumofs-instance]
recipe = slapos.cookbook:generic.kumofs
# Network options
{% if use_ipv6 -%}
ip = {{ (ipv6_set | list)[0] }}
address-family = inet6
{% else -%}
ip = {{ (ipv4_set | list)[0] }}
address-family = inet4
{% endif -%}
{% set tcpv4_port = slapparameter_dict['tcpv4-port'] -%}
manager-port = {{ tcpv4_port }}
server-port = {{ tcpv4_port + 1 }}
server-listen-port = {{ tcpv4_port + 2 }}
gateway-port = {{ tcpv4_port + 3 }}
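# e.g. with tcpv4-port = 2000, this yields manager 2000, server 2001,
# server-listen 2002, gateway 2003 (and the monitor port 2004 below).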
# Paths: Data
{% set ram_storage_size = slapparameter_dict.get('ram-storage-size') -%}
{% if ram_storage_size -%}
data-path = *#capsiz={{ ram_storage_size }}m
{% else -%}
# (with 10M buckets and HDBTLARGE option)
data-path = ${directory:kumofs-data}/kumodb.tch#bnum=10485760#opts=l
{% endif -%}
# Paths: Running wrappers
gateway-wrapper = ${directory:services}/kumofs_gateway
manager-wrapper = ${directory:services}/kumofs_manager
server-wrapper = ${directory:services}/kumofs_server
# Paths: Logs
kumo-gateway-log = ${directory:log}/kumo-gateway.log
kumo-manager-log = ${directory:log}/kumo-manager.log
kumo-server-log = ${directory:log}/kumo-server.log
# Binary information
kumo-gateway-binary = {{ parameter_dict['kumo-location'] }}/bin/kumo-gateway
kumo-manager-binary = {{ parameter_dict['kumo-location'] }}/bin/kumo-manager
kumo-server-binary = {{ parameter_dict['kumo-location'] }}/bin/kumo-server
shell-path = {{ parameter_dict['dash-location'] }}/bin/dash
[logrotate-entry-kumofs]
< = logrotate-entry-base
name = kumofs
log = ${kumofs-instance:kumo-gateway-log} ${kumofs-instance:kumo-manager-log} ${kumofs-instance:kumo-server-log}
[directory]
recipe = slapos.cookbook:mkdirectory
log = ${buildout:directory}/var/log
services = ${buildout:directory}/etc/run
plugin = ${buildout:directory}/etc/plugin
srv = ${buildout:directory}/srv
kumofs-data = ${:srv}/kumofs
[resiliency-exclude-file]
# Generate rdiff exclude file in case of resiliency
recipe = slapos.recipe.template:jinja2
inline = {{ '{{ "**\\n" }}' }}
output = ${directory:srv}/exporter.exclude
# Deploy zope promises scripts
[promise-template]
<= monitor-promise-base
promise = check_socket_listening
config-host = ${kumofs-instance:ip}
config-port = ${kumofs-instance:server-listen-port}
[promise-kumofs-server]
<= promise-template
name = kumofs-server.py
config-port = ${kumofs-instance:server-port}
[promise-kumofs-server-listen]
<= promise-template
name = kumofs-server-listen.py
config-port = ${kumofs-instance:server-listen-port}
[promise-kumofs-gateway]
<= promise-template
name = kumofs-gateway.py
config-port = ${kumofs-instance:gateway-port}
[promise-kumofs-manager]
<= promise-template
name = kumofs-manager.py
config-port = ${kumofs-instance:manager-port}
[promise-check-computer-memory]
<= monitor-promise-base
promise = check_command_execute
name = check-computer-memory.py
config-command = "{{ parameter_dict["check-computer-memory-binary"] }}" -db ${monitor-instance-parameter:collector-db} --threshold "{{ slapparameter_dict["computer-memory-percent-threshold"] }}" --unit percent
[monitor-instance-parameter]
monitor-httpd-ipv6 = {{ (ipv6_set | list)[0] }}
monitor-httpd-port = {{ tcpv4_port + 4 }}
monitor-title = {{ slapparameter_dict['name'] }}
password = {{ slapparameter_dict['monitor-passwd'] }}
#!{{ dash }}
# DO NOT RUN THIS SCRIPT ON PRODUCTION INSTANCE
# OR MYSQL DATA WILL BE ERASED.
# This script imports the dump of the mysql database into the real
# database. It is launched by the clone (importer) instance of webrunner
# at the end of the import script.
# Depending on the outcome, it creates a file containing
# the status of the restoration (success or failure).
set -e
mysql_executable='{{ mysql_executable }}'
mariadb_data_directory='{{ mariadb_data_directory }}'
mariadb_backup_directory='{{ mariadb_backup_directory }}'
pid_file='{{ pid_file }}'
binlog_path='{{ binlog_path }}'
server_executable='{{ server_executable }}'
# Make sure mariadb is not already running
if [ -e "$pid_file" ]; then
if ! pid=$(cat "$pid_file"); then
echo "Cannot read Mariadb pidfile, assuming running. Aborting."
exit 1
fi
if kill -0 "$pid"; then
echo "Mariadb is already running with pid $pid. Aborting."
exit 1
fi
fi
echo "Deleting existing database..."
find "$mariadb_data_directory" -mindepth 1 -delete
# $binlog_path can be empty if incremental_backup_retention_days <= -1
if [ -n "$binlog_path" ]; then
new_binlog_directory="$(dirname "$binlog_path")"
binlog_index_file="$new_binlog_directory/binlog.index"
if [ -e "$binlog_index_file" ]; then
echo "Adapting binlog database to new paths..."
old_binlog_directory="$(dirname "$(head -n 1 "$binlog_index_file")")"
sed -e "s|$old_binlog_directory|$new_binlog_directory|g" "$binlog_index_file" > "$binlog_index_file.tmp"
mv "$binlog_index_file.tmp" "$binlog_index_file"
fi
fi
echo "Starting mariadb..."
"$server_executable" --innodb-flush-method=nosync --skip-innodb-doublewrite --innodb-flush-log-at-trx-commit=0 --sync-frm=0 --slow-query-log=0 --skip-log-bin &
mysqld_pid=$!
trap "kill $mysqld_pid" EXIT TERM INT
sleep 30
# If mysql has stopped, abort
if ! [ -d /proc/$mysqld_pid ]; then
echo "mysqld exited, aborting."
exit 1
fi
echo "Importing data..."
# Use the latest dump. XXX: the filename can contain funny characters.
dump=$(ls -r "$mariadb_backup_directory" | head -1)
zcat "$mariadb_backup_directory/$dump" | "$mysql_executable" || {
RESTORE_EXIT_CODE=$?
echo 'Backup restoration failed.'
exit $RESTORE_EXIT_CODE
}
echo 'Backup restoration successfully completed.'
#!{{ dash }}
set -eu
if [ $# -ne 7 ]; then
echo "Bootstrap a mariadb instance from available backup data so it replicates from given master."
echo " $0 <BACKUP FILE> <MASTER HOST> <MASTER PORT> <MASTER USER> <MASTER SSL CA FILE> <MASTER SSL CERT FILE> <MASTER SSL KEY FILE>"
exit 1
fi
BACKUP=$1
MASTER_HOST=$2
MASTER_PORT=$3
MASTER_USER=$4
MASTER_SSL_CA=$5
MASTER_SSL_CERT=$6
MASTER_SSL_KEY=$7
CLIENT='{{ client }}'
DATA_DIRECTORY='{{ data_directory }}'
PID_FILE='{{ pid_file }}'
SERVER='{{ server }}'
UPDATE='{{ update }}'
SOCKET='{{ socket }}'
# Make sure mariadb is not already running
if [ -e "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
if [ $? -ne 0 ]; then
echo "Cannot read Mariadb pidfile, assuming running. Aborting."
exit 1
fi
if kill -0 "$PID"; then
echo "Mariadb is already running with pid $PID. Aborting."
exit 1
fi
fi
BACKUP_HEAD="$(zcat "$BACKUP" | head -n 100)"
SQL_CHANGE_MASTER=$(echo "$BACKUP_HEAD" | grep "^--\s*CHANGE MASTER TO " | sed "s/^--\s*//")
if [ -z "$SQL_CHANGE_MASTER" ]; then
echo "'CHANGE MASTER TO' statement not found in given backup file."
echo "Is replication enabled on future master ?"
exit 1
fi
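# MariaDB dumps taken with --master-data may also embed the GTID position as a
# comment, e.g. (hypothetical values): -- SET GLOBAL gtid_slave_pos='0-1-12345';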
SQL_SET_GTID="$(echo "$BACKUP_HEAD" | grep "^--\s*SET GLOBAL gtid_slave_pos=" | sed "s/^--\s*//")"
if [ -z "$SQL_SET_GTID" ]; then
echo "Info: GTID not found in backup, it will not be enabled."
MASTER_USE_GTID=0
else
echo "Info: GTID found in backup, it will be enabled."
MASTER_USE_GTID=1
fi
echo "EXISTING DATABASE CONTENT WILL BE DESTROYED"
echo "You have 5 seconds to interrupt this script..."
if sleep 5; then
echo "Expired, proceeding"
else
echo "Interrupted, aborting"
exit 1
fi
echo "Emptying data directory..."
find "$DATA_DIRECTORY" -mindepth 1 -delete
echo -n "Starting mariadb for backup restoration"
"$SERVER" --innodb-flush-method=nosync --skip-innodb-doublewrite --innodb-flush-log-at-trx-commit=0 --sync-frm=0 --slow-query-log=0 --skip-log-bin &
PID=$!
trap "kill $PID; wait; exit 1" EXIT
while true; do
if [ ! -e "/proc/$PID" ]; then
trap EXIT
echo "Service exited, check logs"
wait
exit 1
fi
test -e "$SOCKET" && break
echo -n .
sleep 0.5
done
"$UPDATE"
echo "Importing $BACKUP ..."
zcat "$BACKUP" | "$CLIENT"
echo "Configuring server as slave..."
if [ "$MASTER_USE_GTID" -eq 1 ]; then
"$CLIENT" -e "$SQL_SET_GTID"
MASTER_USE_GTID_SQL="slave_pos"
else
MASTER_USE_GTID_SQL="NO"
fi
"$CLIENT" -e "
CHANGE MASTER TO
MASTER_HOST='$MASTER_HOST',
MASTER_USER='$MASTER_USER',
MASTER_PORT=$MASTER_PORT,
MASTER_SSL=1,
MASTER_SSL_CA='$MASTER_SSL_CA',
MASTER_SSL_CERT='$MASTER_SSL_CERT',
MASTER_SSL_KEY='$MASTER_SSL_KEY',
MASTER_SSL_VERIFY_SERVER_CERT=1,
MASTER_USE_GTID=$MASTER_USE_GTID_SQL;
"
if [ "$MASTER_USE_GTID" -eq 0 ]; then
# No GTID, use binlog name & offset as provided by backup file.
# Example: CHANGE MASTER TO MASTER_LOG_FILE='binlog.003447', MASTER_LOG_POS=360;
# Notes:
# - Must happen after setting MASTER_HOST & MASTER_PORT.
# - Implicitly sets MASTER_USE_GTID=NO if it was set before.
"$CLIENT" -e "$SQL_CHANGE_MASTER"
fi
"$CLIENT" -e "START SLAVE;"
echo "Stopping mariadb..."
trap EXIT
kill $PID
wait
echo "Done. Start mariadb normally. You may use 'show slave status' SQL command to monitor progress."
{% set part_list = [] -%}
{% macro section(name) %}{% do part_list.append(name) %}{{ name }}{% endmacro -%}
{% set use_ipv6 = slapparameter_dict.get('use-ipv6', False) -%}
{% set database_list = slapparameter_dict.get('database-list', [{'name': 'erp5', 'user': 'user', 'password': 'insecure', 'with-process-privilege': True}]) -%}
{% set test_database_list = [] %}
{% for database_count in range(slapparameter_dict.get('test-database-amount', 1)) -%}
{% do test_database_list.append({'name': 'erp5_test_' ~ database_count, 'user': 'testuser_' ~ database_count, 'password': 'testpassword' ~ database_count}) -%}
{% endfor -%}
{% set catalog_backup = slapparameter_dict.get('catalog-backup', {}) -%}
{% set backup_periodicity = slapparameter_dict.get('backup-periodicity', 'daily') -%}
{% set full_backup_retention_days = catalog_backup.get('full-retention-days', 7) -%}
{% set incremental_backup_retention_days = catalog_backup.get('incremental-retention-days', full_backup_retention_days) -%}
{% set port = slapparameter_dict['tcpv4-port'] %}
{% if use_ipv6 -%}
{% set ip = (ipv6_set | list)[0] -%}
{% else -%}
{% set ip = (ipv4_set | list)[0] -%}
{% endif -%}
{% set dash = parameter_dict['dash-location'] ~ '/bin/dash' %}
[{{ section('publish') }}]
recipe = slapos.cookbook:publish.serialised
-extends = publish-early
{% macro render_database_list(database_list) -%}
{% set publish_database_list = [] -%}
{% for database in database_list -%}
{% if database.get('user') -%}
{% do publish_database_list.append("mysql://" ~ database['user'] ~ ":" ~ database['password'] ~ "@" ~ ip ~ ":" ~ port ~ "/" ~ database['name']) -%}
{% else -%}
{% do publish_database_list.append("mysql://" ~ ip ~ ":" ~ port ~ "/" ~ database['name']) -%}
{% endif -%}
{% endfor -%}
{{ dumps(publish_database_list) }}
{% endmacro -%}
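{# With the default database-list above this renders e.g. (hypothetical ip)
   ["mysql://user:insecure@10.0.0.1:2099/erp5"]. -#}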
database-list = {{ render_database_list(database_list) }}
test-database-list = {{ render_database_list(test_database_list) }}
monitor-base-url = ${monitor-publish-parameters:monitor-base-url}
[publish-early]
recipe = slapos.cookbook:publish-early
-init =
server-id gen-server-id:value
{%- set server_id = slapparameter_dict.get('server-id') %}
{%- if server_id %}
server-id = {{ dumps(server_id) }}
{%- endif %}
[gen-server-id]
recipe = slapos.cookbook:random.integer
minimum = {{ dumps(1) }}
maximum = {{ dumps(2**32 - 1) }}
[simplefile]
recipe = slapos.recipe.template:jinja2
inline = {{ '{{ content }}' }}
{% macro simplefile(section_name, file_path, content, mode='') -%}
{% set content_section_name = section_name ~ '-content' -%}
[{{ content_section_name }}]
content = {{ dumps(content) }}
[{{ section(section_name) }}]
< = simplefile
output = {{ file_path }}
context = key content {{content_section_name}}:content
mode = {{ mode }}
{%- endmacro %}
{% set ssl_dict = {} -%}
{% macro sslfile(key, content, mode='644') -%}
{% set path = '${directory:mariadb-ssl}/' ~ key ~ '.pem' -%}
{% do ssl_dict.__setitem__(key, path) -%}
{{ simplefile('ssl-file-' ~ key, path, content, mode) }}
{%- endmacro %}
{% set ssl_parameter_dict = slapparameter_dict.get('ssl') -%}
{% if ssl_parameter_dict -%}
{% set base_directory = '${directory:mariadb-ssl}/' -%}
{# Note: The key content will be stored in .installed.cfg and in this template's
rendering, so the only point of the mode is to avoid mariadb complaining
about an overly lax file mode. -#}
{{ sslfile('key', ssl_parameter_dict['key'], mode='600') }}
{{ sslfile('crt', ssl_parameter_dict['crt']) }}
{% if 'ca-crt' in ssl_parameter_dict -%}
{{ sslfile('ca-crt', ssl_parameter_dict['ca-crt']) }}
{% endif -%}
{% if 'crl' in ssl_parameter_dict -%}
{{ sslfile('crl', ssl_parameter_dict['crl']) }}
{% endif -%}
{%- endif %}
{% if full_backup_retention_days > -1 -%}
[{{ section('cron-entry-mariadb-backup') }}]
recipe = slapos.cookbook:cron.d
cron-entries = ${cron:cron-entries}
name = mariadb-backup
time = {{ dumps(backup_periodicity) }}
{# When binlogs are enabled:
# flush-logs: used so no manipulation on binlogs is needed to restore from
# full + binlogs. The first binlog after a dump starts from dump snapshot and
# can be fully restored.
# master-data: use value "2" as we are not in a replication case
#}
command = "${binary-wrap-mysqldump:wrapper-path}" --all-databases --flush-privileges --single-transaction --max-allowed-packet=128M {% if incremental_backup_retention_days > -1 %}--flush-logs --master-data=2 {% endif %}| {{ parameter_dict['gzip-location'] }}/bin/gzip > "${directory:mariadb-backup-full}/$({{ parameter_dict['coreutils-location'] }}/bin/date "+%Y%m%d%H%M%S").sql.gz"
{# KEEP GLOB PATTERN IN SYNC with generated filenames above
# YYYYmmddHHMMSS -#}
file-glob = ??????????????.sql.gz
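{# Restore sketch (hypothetical paths, not part of this template): pick the
# newest dump and pipe it back through the mysql client wrapper, e.g.
#   zcat srv/backup/mariadb-full/20240101000000.sql.gz | bin/mysql
# then, if binlogs are kept, replay them with mysqlbinlog from the position
# recorded in the dump's commented CHANGE MASTER line.
-#}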
{% if full_backup_retention_days > 0 -%}
[{{ section("cron-entry-mariadb-backup-expire") }}]
recipe = slapos.cookbook:cron.d
cron-entries = ${cron:cron-entries}
name = mariadb-backup-expire
time = {{ dumps(backup_periodicity) }}
command = {{ parameter_dict['findutils-location'] }}/bin/find "${directory:mariadb-backup-full}" -maxdepth 1 -name "${cron-entry-mariadb-backup:file-glob}" -daystart -mtime +{{ full_backup_retention_days }} -delete
{%- endif %}
{%- endif %}
[my-cnf-parameters]
ip = {{ ip }}
port = {{ port }}
socket = ${directory:run}/mariadb.sock
data-directory = ${directory:srv}/mariadb
tmp-directory = ${directory:tmp}
etc-directory = ${directory:etc}
plugin-directory = {{ dumps(parameter_dict['mroonga-mariadb-plugin-dir']) }}
groonga-plugins-path = {{ parameter_dict['groonga-plugins-path'] }}
pid-file = ${directory:run}/mariadb.pid
error-log = ${directory:log}/mariadb_error.log
slow-query-log = ${directory:log}/mariadb_slowquery.log
long-query-time = {{ dumps(slapparameter_dict.get('long-query-time', 1)) }}
max-connection-count = {{ dumps(slapparameter_dict.get('max-connection-count', 1000)) }}
innodb-buffer-pool-size = {{ dumps(slapparameter_dict.get('innodb-buffer-pool-size', 0)) }}
innodb-buffer-pool-instances = {{ dumps(slapparameter_dict.get('innodb-buffer-pool-instances', 0)) }}
innodb-log-file-size = {{ dumps(slapparameter_dict.get('innodb-log-file-size', 0)) }}
innodb-file-per-table = {{ dumps(slapparameter_dict.get('innodb-file-per-table', 0)) }}
innodb-log-buffer-size = {{ dumps(slapparameter_dict.get('innodb-log-buffer-size', 0)) }}
relaxed-writes = {{ dumps(slapparameter_dict.get('relaxed-writes', False)) }}
character-set-server = {{ dumps(slapparameter_dict.get('character-set-server', 'utf8mb4')) }}
collation-server = {{ dumps(slapparameter_dict.get('collation-server', 'utf8mb4_general_ci')) }}
{% if incremental_backup_retention_days > -1 -%}
binlog-path = ${directory:mariadb-backup-incremental}/binlog
# XXX: binlog rotation happens along with the other logs' rotation
binlog-expire-days = {{ dumps(incremental_backup_retention_days) }}
server-id = ${publish-early:server-id}
{% else %}
binlog-path =
{%- endif %}
{%- for key, value in ssl_dict.items() -%}
ssl-{{ key }} = {{ value }}
{% endfor %}
[my-cnf]
recipe = slapos.recipe.template:jinja2
output = ${directory:etc}/mariadb.cnf
url = {{ parameter_dict['template-my-cnf'] }}
context = section parameter_dict my-cnf-parameters
[init-script-parameters]
database-list = {{ dumps(database_list + test_database_list) }}
mroonga-mariadb-install-sql = {{ dumps(parameter_dict['mroonga-mariadb-install-sql']) }}
[init-script]
recipe = slapos.recipe.template:jinja2
# XXX: is there a better location?
output = ${directory:etc}/mariadb_initial_setup.sql
url = {{ parameter_dict['template-mariadb-initial-setup'] }}
context = section parameter_dict init-script-parameters
[{{ section('update-mysql') }}]
recipe = slapos.cookbook:generic.mysql.wrap_update_mysql
output = ${directory:services}/mariadb_update
binary = ${binary-wrap-mysql_upgrade:wrapper-path}
mysql = ${binary-wrap-mysql:wrapper-path}
init-script = ${init-script:output}
mysql_tzinfo_to_sql = ${binary-wrap-mysql_tzinfo_to_sql:wrapper-path}
[{{ section('mysqld') }}]
recipe = slapos.recipe.template:jinja2
output = ${directory:services}/mariadb
url = {{ parameter_dict['template-mysqld-wrapper'] }}
context =
key defaults_file my-cnf:output
key datadir my-cnf-parameters:data-directory
key environ :environ
environ =
GRN_PLUGINS_PATH='${my-cnf-parameters:groonga-plugins-path}'
ODBCSYSINI='${my-cnf-parameters:etc-directory}'
LD_LIBRARY_PATH=$${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}'{{ parameter_dict['unixodbc-location'] }}/lib'
{%- for variable in slapparameter_dict.get('environment-variables', ()) %}
{{ variable }}
{%- endfor %}
[{{ section('odbc-ini') }}]
recipe = slapos.recipe.template
output = ${directory:etc}/odbc.ini
inline = {{ dumps(slapparameter_dict.get('odbc-ini', ' ')) }}
[{{ section('logrotate-entry-mariadb') }}]
< = logrotate-entry-base
name = mariadb
log = ${my-cnf-parameters:error-log} ${my-cnf-parameters:slow-query-log}
post = "${binary-wrap-mysql:wrapper-path}" -B -e "FLUSH LOGS"
[{{ section('binary-link') }}]
recipe = slapos.cookbook:symbolic.link
target-directory = ${directory:bin}
link-binary = {{ dumps(parameter_dict['link-binary']) }}
[binary-wrap-base]
recipe = slapos.cookbook:wrapper
# Note: --defaults-file must be the first argument, otherwise the wrapped
# binary will reject it.
command-line = "{{ parameter_dict['mariadb-location'] }}/bin/${:command}" --defaults-file="${my-cnf:output}"
wrapper-path = ${directory:bin}/${:command}
[binary-wrap-mysql]
< = binary-wrap-base
command = mysql
[binary-wrap-mysqldump]
< = binary-wrap-base
command = mysqldump
[binary-wrap-mysql_upgrade]
< = binary-wrap-base
command = mysql_upgrade
[binary-wrap-mysql_tzinfo_to_sql]
< = binary-wrap-base
command-line = "{{ parameter_dict['mariadb-location'] }}/bin/${:command}"
command = mysql_tzinfo_to_sql
[binary-wrap-pt-digest]
<= binary-wrap-base
command-line = "{{ parameter_dict['percona-tools-location'] }}/bin/${:command}"
command = pt-query-digest
[directory]
recipe = slapos.cookbook:mkdirectory
bin = ${buildout:directory}/bin
etc = ${buildout:directory}/etc
services = ${:etc}/run
plugin = ${:etc}/plugin
srv = ${buildout:directory}/srv
tmp = ${buildout:directory}/tmp
backup = ${:srv}/backup
mariadb-backup-full = ${:backup}/mariadb-full
mariadb-backup-incremental = ${:backup}/mariadb-incremental
mariadb-ssl = ${:etc}/mariadb-ssl
var = ${buildout:directory}/var
log = ${:var}/log
run = ${:var}/run
slowquery = ${monitor-directory:private}/slowquery_digest
[{{ section('resiliency-exclude-file') }}]
# Generate rdiff exclude file in case of resiliency
recipe = slapos.recipe.template:jinja2
inline = {{ '{{ "${my-cnf-parameters:data-directory}/**\\n${directory:mariadb-backup-incremental}/**\\n${directory:log}/**\\n${directory:tmp}/**\\n" }}' }}
output = ${directory:srv}/exporter.exclude
[{{ section("resiliency-identity-signature-script")}}]
# Generate identity script used by webrunner to check data integrity
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/backup-identity-script-excluding-path --exclude-path "srv/backup/logrotate/**"
wrapper-path = ${directory:srv}/.backup_identity_script
mode = 770
[dash]
dash = {{ dumps(dash) }}
[{{ section('start-clone-from-backup') }}]
recipe = slapos.recipe.template:jinja2
url = {{ parameter_dict['mariadb-start-clone-from-backup'] }}
output = ${directory:bin}/start-clone-from-backup
context =
key dash dash:dash
key client binary-wrap-mysql:wrapper-path
key data_directory my-cnf-parameters:data-directory
key pid_file my-cnf-parameters:pid-file
key server mysqld:output
key update update-mysql:output
key socket my-cnf-parameters:socket
[{{ section('resiliency-after-import-script') }}]
# Generate the after-import script used by the importer instance of webrunner
recipe = slapos.recipe.template:jinja2
url = {{ parameter_dict['mariadb-resiliency-after-import-script'] }}
output = ${directory:bin}/restore-from-backup
context =
key dash dash:dash
key mysql_executable binary-wrap-mysql:wrapper-path
key mariadb_data_directory my-cnf-parameters:data-directory
key mariadb_backup_directory directory:mariadb-backup-full
key pid_file my-cnf-parameters:pid-file
key binlog_path my-cnf-parameters:binlog-path
key server_executable mysqld:output
[{{ section('monitor-generate-mariadb-slow-query-report') }}]
recipe = slapos.cookbook:cron.d
cron-entries = ${cron:cron-entries}
name = generate-mariadb-slow-query-report
# This must run before logrotate rotates the logs.
# Here, logrotate-entry-base:frequency = daily, so we run at 23:00 every day.
frequency = 0 23 * * *
command = ${monitor-generate-mariadb-slow-query-report-wrapper:output}
[monitor-generate-mariadb-slow-query-report-wrapper]
recipe = slapos.recipe.template:jinja2
url = {{ parameter_dict['mariadb-slow-query-report-script'] }}
output = ${directory:bin}/${:filename}
filename = generate-mariadb-slow-query-report
context =
raw slow_query_path ${directory:srv}/backup/logrotate/mariadb_slowquery.log
raw pt_query_exec ${binary-wrap-pt-digest:wrapper-path}
raw dash {{ parameter_dict['dash-location'] }}/bin/dash
raw xz {{ parameter_dict['xz-utils-location'] }}/bin/xz
key output_folder directory:slowquery
{% if slapparameter_dict.get('max-slowqueries-threshold') and slapparameter_dict.get('slowest-query-threshold') %}
[{{ section('monitor-promise-slowquery-result') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-slow-query-pt-digest-result.py
config-command = "{{ parameter_dict['promise-check-slow-queries-digest-result'] }}" --ptdigest_path "${directory:slowquery}" --status_file ${monitor-directory:private}/mariadb_slow_query.report.json --max_queries_threshold "${:max_queries_threshold}" --slowest_query_threshold "${:slowest_queries_threshold}"
max_queries_threshold = {{ slapparameter_dict['max-slowqueries-threshold'] }}
slowest_queries_threshold = {{ slapparameter_dict['slowest-query-threshold'] }}
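# Example (hypothetical values): max-slowqueries-threshold = 100 and
# slowest-query-threshold = 5 would make this promise check the daily
# pt-query-digest report against at most 100 slow queries, none of them
# slower than 5 seconds.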
{%- endif %}
[{{ section('promise-check-computer-memory') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-computer-memory.py
config-command = "{{ parameter_dict["check-computer-memory-binary"] }}" -db ${monitor-instance-parameter:collector-db} --threshold "{{ slapparameter_dict["computer-memory-percent-threshold"] }}" --unit percent
[{{ section('promise') }}]
<= monitor-promise-base
promise = check_command_execute
name = mariadb.py
config-command = "${binary-wrap-mysql:wrapper-path}" --execute ';' {% if database_list and database_list[0].get('user') %} --host="${my-cnf-parameters:ip}" --port="${my-cnf-parameters:port}" --user="{{ database_list[0]['user'] }}" --password="{{ database_list[0]['password'] }}" {% endif %}
[monitor-instance-parameter]
monitor-httpd-ipv6 = {{ (ipv6_set | list)[0] }}
monitor-httpd-port = {{ port + 1 }}
monitor-title = {{ slapparameter_dict['name'] }}
password = {{ slapparameter_dict['monitor-passwd'] }}
[buildout]
extends =
{{ template_monitor }}
parts +=
{{ part_list | join('\n ') }}
{% set part_list = [] -%}
{% macro section(name) %}{% do part_list.append(name) %}{{ name }}{% endmacro -%}
{% if slapparameter_dict['use-ipv6'] -%}
{% set ip = '[' ~ (ipv6_set | list)[0] ~ ']' -%}
{% else -%}
{% set ip = (ipv4_set | list)[0] -%}
{% endif -%}
{% set tcpv4_port = slapparameter_dict['tcpv4-port'] -%}
{% set relay = slapparameter_dict.get('relay', {}) -%}
{% set divert = slapparameter_dict.get('divert', []) -%}
{% set alias_dict = slapparameter_dict.get('alias-dict', {}) -%}
{% do alias_dict.setdefault('postmaster', [slapparameter_dict['postmaster']]) -%}
{% set smtpd_sasl_user = slapparameter_dict['smtpd-sasl-user'] -%}
{% set smtpd_sasl_password = slapparameter_dict['smtpd-sasl-password'] -%}
{% set milter_list = [] %}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
[smtpd-password]
recipe = slapos.cookbook:generate.password
storage-path =
[{{ section('publish') }}]
recipe = slapos.cookbook:publish.serialised
url = {{ dumps('smtp://' ~ urllib.quote_plus(smtpd_sasl_user) ~ ':' ~ urllib.quote_plus(smtpd_sasl_password) ~ '@' ~ ip ~ ':' ~ tcpv4_port) }}
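{# Example (hypothetical values): with smtpd-sasl-user "erp5", smtpd-sasl-password
   "secret", ip 10.0.0.2 and tcpv4-port 2025, this publishes
   smtp://erp5:secret@10.0.0.2:2025 -#}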
[directory]
recipe = slapos.cookbook:mkdirectory
etc = ${buildout:directory}/etc
plugin = ${:etc}/plugin
etc-postfix = ${:etc}/postfix
etc-cyrus = ${:etc}/cyrus
run = ${:etc}/run
bin = ${buildout:directory}/bin
usr = ${buildout:directory}/usr
srv = ${buildout:directory}/srv
var = ${buildout:directory}/var
var-log = ${:var}/log
var-lib = ${:var}/lib
var-lib-postfix = ${:var-lib}/postfix
var-spool = ${:var}/spool
var-spool-postfix = ${:var-spool}/postfix
# Not used at buildout level, presence needed by postfix.
var-spool-postfix-active = ${:var-spool-postfix}/active
var-spool-postfix-bounce = ${:var-spool-postfix}/bounce
var-spool-postfix-corrupt = ${:var-spool-postfix}/corrupt
var-spool-postfix-defer = ${:var-spool-postfix}/defer
var-spool-postfix-deferred = ${:var-spool-postfix}/deferred
var-spool-postfix-flush = ${:var-spool-postfix}/flush
var-spool-postfix-hold = ${:var-spool-postfix}/hold
var-spool-postfix-incoming = ${:var-spool-postfix}/incoming
var-spool-postfix-maildrop = ${:var-spool-postfix}/maildrop
var-spool-postfix-pid = ${:var-spool-postfix}/pid
var-spool-postfix-private = ${:var-spool-postfix}/private
var-spool-postfix-public = ${:var-spool-postfix}/public
var-spool-postfix-saved = ${:var-spool-postfix}/saved
var-spool-postfix-trace = ${:var-spool-postfix}/trace
# Used for ERP5 resiliency or (more probably)
# webrunner resiliency with erp5 inside.
[{{ section("resiliency-exclude-file") }}]
# Generate rdiff exclude file
recipe = slapos.recipe.template
inline = {{ '{{ "**\\n" }}' }}
output = ${directory:srv}/exporter.exclude
{% if divert -%}
{% set milter_port = tcpv4_port + 1 -%}
{% set socket = 'inet:' ~ ip ~ ':' ~ milter_port -%}
[{{ section('divert-milter') }}]
recipe = slapos.cookbook:wrapper
command-line =
'{{ parameter_dict['buildout-bin-directory'] }}/munnel'
--listen '{{ socket }}'
-- {{ ' '.join(divert) }}
wrapper-path = ${directory:run}/munnel
{% do milter_list.append(socket) -%}
[{{ section('munnel-promise') }}]
<= monitor-promise-base
promise = check_socket_listening
name = munnel.py
config-host = {{ ip }}
config-port = {{ milter_port }}
{% endif -%}
[configuration]
smtp = {{ dumps(tcpv4_port) }}
inet-interfaces = {{ dumps(ip) }}
alias-dict = {{ dumps(alias_dict) }}
relayhost = {{ dumps(relay.get('host')) }}
relay-sasl-credential = {{ dumps(relay.get('sasl-credential')) }}
cyrus-sasldb = ${directory:etc-cyrus}/postfix.gdbm
milter-list = {{ dumps(milter_list) }}
xz-utils-location = {{ dumps(parameter_dict['xz-utils-location']) }}
[userinfo]
recipe = slapos.cookbook:userinfo
[smtp-sasl-passwd]
< = jinja2-template-base
output = ${directory:etc-postfix}/sasl_passwd
{% if relay -%}
inline = {{ "{{ host }} {{ sasl_credential }}" }}
{%- else -%}
inline =
{%- endif %}
context =
key host configuration:relayhost
key sasl_credential configuration:relay-sasl-credential
[{{ section('cyrus-smtpd-conf') }}]
< = jinja2-template-base
output = ${directory:etc-cyrus}/smtpd.conf
inline =
pwcheck_method: auxprop
mech_list: PLAIN LOGIN
sasldb_path: {{ '{{ sasldb }}' }}
context =
key sasldb configuration:cyrus-sasldb
[{{ section('cyrus-smtpd-password') }}]
recipe = plone.recipe.command
stop-on-error = true
command =
rm -f '${configuration:cyrus-sasldb}' &&
echo '{{ smtpd_sasl_password }}' | '${wrapper-postfix-saslpasswd2:wrapper-path}' -pc '{{ smtpd_sasl_user }}'
update-command = ${:command}
[smtpd-ssl]
recipe = plone.recipe.command
stop-on-error = true
openssl = '{{ parameter_dict['openssl'] }}/bin/openssl'
cert = ${directory:etc-postfix}/smtpd.crt
key = ${directory:etc-postfix}/smtpd.pem
dh-512 = ${directory:etc-postfix}/dh512.pem
dh-2048 = ${directory:etc-postfix}/dh2048.pem
command =
${:openssl} dhparam -out '${:dh-512}' 512 &&
${:openssl} dhparam -out '${:dh-2048}' 2048 &&
${:update}
update =
${:openssl} req -newkey rsa -batch -new -x509 -days 3650 -nodes -keyout '${:key}' -out '${:cert}'
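# The certificate is self-signed and valid for 10 years (3650 days). To inspect
# it manually, one may run, e.g.:
#   openssl x509 -in etc/postfix/smtpd.crt -noout -subject -enddate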
[{{ section('postfix-logrotate') }}]
recipe = slapos.cookbook:cron.d
cron-entries = ${cron:cron-entries}
name = postfix-logrotate
frequency = 0 0 * * *
command = ${directory:bin}/postfix logrotate
[postfix-main-cf-parameter]
postfix-location = {{ parameter_dict['postfix-location'] }}
[{{ section('postfix-main-cf') }}]
< = jinja2-template-base
output = ${directory:etc-postfix}/main.cf
url = {{ parameter_dict['template-postfix-main-cf'] }}
context =
key bin_directory directory:bin
key usr_directory directory:usr
key queue_directory directory:var-spool-postfix
key data_directory directory:var-lib-postfix
key spool_directory directory:var-spool
key mail_owner userinfo:pw-name
key setgid_group userinfo:gr-name
key inet_interfaces configuration:inet-interfaces
key relayhost configuration:relayhost
key sasl_passwd typed-paths:smtp-sasl-passwd
key aliases typed-paths:aliases
key milter_list configuration:milter-list
key cyrus_directory directory:etc-cyrus
key cert smtpd-ssl:cert
key key smtpd-ssl:key
key dh_512 smtpd-ssl:dh-512
key dh_2048 smtpd-ssl:dh-2048
key log_directory directory:var-log
key xz_utils_location configuration:xz-utils-location
key postfix_location postfix-main-cf-parameter:postfix-location
key etc_postfix directory:etc-postfix
[{{ section('postfix-master-cf') }}]
< = jinja2-template-base
output = ${directory:etc-postfix}/master.cf
url = {{ parameter_dict['template-postfix-master-cf'] }}
context = key smtp configuration:smtp
[aliases]
< = jinja2-template-base
url = {{ parameter_dict['template-postfix-aliases'] }}
output = ${directory:etc-postfix}/aliases
context =
key alias_dict configuration:alias-dict
[typed-paths]
# Postfix-friendly rendering of file paths, prefixed with database type.
aliases = hash:${aliases:output}
smtp-sasl-passwd = hash:${smtp-sasl-passwd:output}
[{{ section('postalias-db') }}]
recipe = plone.recipe.command
stop-on-error = true
command = '${wrapper-postalias:wrapper-path}' '${typed-paths:aliases}' '${typed-paths:smtp-sasl-passwd}'
update-command = ${:command}
[wrapper-postfix-saslpasswd2]
recipe = slapos.cookbook:wrapper
command-line = '{{ parameter_dict['cyrus-sasl-location'] }}/sbin/saslpasswd2' -f '${configuration:cyrus-sasldb}'
wrapper-path = ${directory:bin}/saslpasswd2
[base-wrapper]
recipe = slapos.cookbook:wrapper
environment =
MAIL_CONFIG=${directory:etc-postfix}
SASL_CONF_PATH=${directory:etc-cyrus}
[base-bin-wrapper]
< = base-wrapper
command-line = ${:path}/${:basename}
wrapper-path = ${directory:bin}/${:basename}
[base-bin-bin-wrapper]
< = base-bin-wrapper
path = {{ parameter_dict['postfix-location'] }}/usr/bin
[base-sbin-bin-wrapper]
< = base-bin-wrapper
path = {{ parameter_dict['postfix-location'] }}/usr/sbin
{% for extend, basename_list in (
(
'base-bin-bin-wrapper',
(
'mailq',
'newaliases',
),
),
(
'base-sbin-bin-wrapper',
(
'postalias',
'postcat',
'postconf',
'postdrop',
'postfix',
'postkick',
'postlock',
'postlog',
'postmap',
'postmulti',
'postqueue',
'postsuper',
'sendmail',
),
),
) %}
{% for basename in basename_list -%}
[{{ section('wrapper-' ~ basename) }}]
< = {{ extend }}
basename = {{ basename }}
{% endfor %}
{% endfor %}
[{{ section('postfix-symlinks-libexec') }}]
recipe = slapos.cookbook:symbolic.link
target-directory = ${directory:usr}
link-binary =
{{ parameter_dict['postfix-location'] }}/usr/libexec
[{{ section('service-postfix-master') }}]
< = base-wrapper
command-line = ${directory:usr}/libexec/postfix/master
wrapper-path = ${directory:run}/postfix-master
[{{ section('postfix-promise') }}]
<= monitor-promise-base
promise = check_socket_listening
name = postfix.py
config-host = {{ ip }}
config-port = {{ tcpv4_port }}
[{{ section('promise-check-computer-memory') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-computer-memory.py
config-command = "{{ parameter_dict["check-computer-memory-binary"] }}" -db ${monitor-instance-parameter:collector-db} --threshold "{{ slapparameter_dict["computer-memory-percent-threshold"] }}" --unit percent
[monitor-instance-parameter]
monitor-httpd-ipv6 = {{ (ipv6_set | list)[0] }}
monitor-httpd-port = {{ tcpv4_port + 2 }}
monitor-title = {{ slapparameter_dict['name'] }}
password = {{ slapparameter_dict['monitor-passwd'] }}
[buildout]
extends =
{{ template_monitor }}
parts =
{{ part_list | join('\n ') }}
{# instance that runs WCFS service associated with ZODB storage #}
{% from "instance_zodb_base" import zodb_dict with context %}
{# q(text) returns urllib_parse.quote_plus(text) #}
{% macro q(text) %}{{ urllib_parse.quote_plus(text) }}{% endmacro %}
{# build zurl to connect to configured ZODB #}
{% if len(zodb_dict) != 1 -%}
{% do assert(False, ("WCFS supports only single ZODB storage", zodb_dict)) -%}
{% endif -%}
{% set db_name, zodb = zodb_dict.popitem() -%}
{% set z = zodb['storage-dict'] -%}
{% if zodb['type'] == 'zeo' -%}
{% set zurl = ('zeo://%s?storage=%s' % (z['server'], z['storage'])) -%}
{% elif zodb['type'] == 'neo' -%}
{# neo(s)://[credentials@]master1,master2,...,masterN/name?options #}
{# (see https://lab.nexedi.com/kirr/neo/blob/3e13fa06/go/neo/client.go#L417) #}
{# If 'ca' in storage-dict, ssl is true. #}
{# (see https://lab.nexedi.com/nexedi/slapos/blob/397726e1/stack/erp5/instance-zodb-base.cfg.in#L17-21) #}
{% if "ca" in z -%}
{# ca=ca.crt;cert=my.crt;key=my.key (see https://lab.nexedi.com/kirr/neo/blob/3e13fa06/go/neo/client.go#L428) #}
{% set zurl = 'neos://ca=%s;cert=%s;key=%s@' % (q(z.pop("ca")), q(z.pop("cert")), q(z.pop("key"))) -%}
{% else -%}
{% set zurl = 'neo://' -%}
{% endif -%}
{% set zurl = ('%s%s/%s' % (zurl, z.pop('master_nodes'), z.pop('name'))) -%}
{% set zurl = zurl + '?' + (z | dictsort | urlencode) -%}
{% else -%}
{% do assert(False, ("unsupported ZODB type", zodb)) -%}
{% endif -%}
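{# Example resulting zurls (hypothetical values):
   zeo: zeo://10.0.0.2:2100?storage=root
   neo without ssl: neo://10.0.0.2:2051/mycluster?<remaining storage-dict options>
   neo with ssl: neos://ca=...;cert=...;key=...@10.0.0.2:2051/mycluster -#}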
[buildout]
extends = {{ template_monitor }}
parts +=
wcfs
wcfs-promise
publish
[directory]
recipe = slapos.cookbook:mkdirectory
etc = ${buildout:directory}/etc
log = ${:var}/log
run = ${:var}/run
services = ${:etc}/run
service-on-watch = ${:etc}/service
srv = ${buildout:directory}/srv
tmp = ${buildout:directory}/tmp
var = ${buildout:directory}/var
[wcfs]
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/wcfs serve -log_dir=${directory:log} {{ zurl }}
wrapper-path = ${directory:service-on-watch}/wcfs
[wcfs-promise]
<= monitor-promise-base
promise = check_command_execute
name = ${:_buildout_section_name_}.py
config-command = {{ bin_directory }}/wcfs status {{ zurl }}
[publish]
recipe = slapos.cookbook:publish.serialised
serving-zurl = {{ zurl }}
{% set ports = itertools.count(slapparameter_dict['tcpv4-port']) -%}
{% set ipv4 = (ipv4_set | list)[0] -%}
{% set backup_periodicity = slapparameter_dict.get('backup-periodicity', 'daily') -%}
{% set part_list = [] -%}
{% macro section(name) %}{% do part_list.append(name) %}{{ name }}{% endmacro -%}
{% set storage_dict = {} -%}
{% set default_zodb_path = buildout_directory ~ '/srv/zodb' -%}
{% set default_backup_path = buildout_directory ~ '/srv/backup/zodb' -%}
{% set bin_directory = parameter_dict['buildout-bin-directory'] -%}
[zeo-base]
recipe = slapos.cookbook:zeo
log-path = ${directory:log}/${:base-name}.log
pid-path = ${directory:run}/${:base-name}.pid
conf-path = ${directory:etc}/${:base-name}.conf
wrapper-path = ${directory:services}/${:base-name}
binary-path = {{ bin_directory }}/runzeo
ip = {{ ipv4 }}
{% set known_tid_storage_identifier_dict = {} -%}
{% set zodb_dict = {} -%}
{% for name, zodb in six.iteritems(slapparameter_dict['zodb-dict']) -%}
{% do zodb_dict.setdefault(zodb.get('family', 'default').lower(), []).append((name, zodb)) -%}
{% endfor -%}
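{# Example (hypothetical values): a zodb-dict like
   {"root": {}, "deferred": {"family": "Custom"}} groups into
   {"default": [("root", {})], "custom": [("deferred", {...})]},
   i.e. one ZEO process per family. -#}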
{% set tidstorage_port = slapparameter_dict['tcpv4-port'] + len(zodb_dict) -%}
{% for family, zodb in six.iteritems(zodb_dict) -%}
{% set storage_list = [] -%}
{% set current_port = next(ports) -%}
{% set known_tid_storage_identifier_host = (ipv4, current_port), -%}
{% for name, zodb in zodb -%}
{% do storage_dict.__setitem__(name, {'server': ipv4 ~ ':' ~ current_port, 'storage': name}) %}
{% set path = zodb.get('path', '%(zodb)s/%(name)s.fs') % {'zodb': default_zodb_path, 'name': name} -%}
{% do storage_list.append((name, path)) -%}
{% set backup_directory = zodb.get('backup', '%(backup)s/%(name)s') % {'backup': default_backup_path, 'name': name} -%}
{# BBB: No mount-point is specified because mount-points are meaningless for
   ZEO and TIDStorage. Pass '' for compatibility, and not None, because None
   would disable TIDStorage bootstrapping. -#}
{% do known_tid_storage_identifier_dict.__setitem__(json_module.dumps(
(known_tid_storage_identifier_host, name)),
(path, backup_directory, '')) -%}
{% endfor -%}
{% set zeo_section_name = 'zeo-' ~ family %}
[{{ zeo_section_name }}]
< = zeo-base
base-name = zeo-{{ family }}
port = {{ current_port }}
storage = {{ dumps(storage_list) }}
[{{ section("logrotate-" ~ zeo_section_name) }}]
< = logrotate-entry-base
name = {{ "${" ~ zeo_section_name ~ ":base-name}" }}
log = {{ "${" ~ zeo_section_name ~ ":log-path}" }}
post = test ! -s {{ "${" ~ zeo_section_name ~":pid-path}" }} || {{ bin_directory }}/slapos-kill --pidfile {{ "${" ~ zeo_section_name ~ ":pid-path}" }} -s USR2
[{{ section(zeo_section_name ~ "-promise") }}]
<= monitor-promise-base
promise = check_socket_listening
name = zeo-{{ family }}.py
config-host = {{ "${" ~ zeo_section_name ~ ":ip}" }}
config-port = {{ "${" ~ zeo_section_name ~ ":port}" }}
{% endfor -%}
{% if backup_periodicity == 'never' -%}
{% set known_tid_storage_identifier_dict = () %}
{% set tidstorage_repozo_path = '' -%}
{% else -%}
[tidstorage]
recipe = slapos.cookbook:tidstorage
known-tid-storage-identifier-dict = {{ dumps(known_tid_storage_identifier_dict) }}
configuration-path = ${directory:etc}/tidstorage.py
ip = {{ ipv4 }}
port = {{ tidstorage_port }}
{% set tidstorage_repozo_path = slapparameter_dict.get('tidstorage-repozo-path', buildout_directory ~ '/srv/backup/tidstorage') -%}
timestamp-file-path = {{ tidstorage_repozo_path ~ '/repozo_tidstorage_timestamp.log' }}
{# BBB: the recipe requires logfile-name even though it is unused, because tidstorage runs in foreground mode -#}
logfile-name =
pidfile-name = ${directory:run}/tidstorage.pid
{# TODO: Add support for backup status file, so that the status file can be close to the ZODB (rather than close to the backup files). And do it efficiently, to not copy the whole status file every time. -#}
status-file = {{ tidstorage_repozo_path ~ '/tidstorage.tid' }}
tidstorage-repozo-binary = {{ bin_directory }}/tidstorage_repozo
tidstoraged-binary = {{ bin_directory }}/tidstoraged
repozo-binary = {{ bin_directory }}/repozo
repozo-wrapper = ${buildout:bin-directory}/tidstorage-repozo
{% if len(known_tid_storage_identifier_dict) > 1 -%}
tidstorage-wrapper = ${directory:services}/tidstoraged
[{{ section("promise-tidstorage") }}]
<= monitor-promise-base
promise = check_socket_listening
name = tidstorage.py
config-host = ${tidstorage:ip}
config-port = ${tidstorage:port}
{% endif -%}
[{{ section("cron-entry-tidstorage-backup") }}]
# TODO:
# - configurable full/incremental
# - configurable retention
recipe = slapos.cookbook:cron.d
cron-entries = ${cron:cron-entries}
name = tidstorage
time = {{ dumps(backup_periodicity) }}
command = ${tidstorage:repozo-wrapper}
# Used for ERP5 resiliency or (more probably)
# webrunner resiliency with erp5 inside.
[{{ section("resiliency-exclude-file") }}]
# Generate rdiff exclude file
recipe = slapos.recipe.template:jinja2
inline = {{ '{{ "${directory:zodb}/**\\n${directory:log}/**\\n" }}' }}
output = ${directory:srv}/exporter.exclude
[{{ section("resiliency-identity-signature-script")}}]
# Generate identity script used by webrunner to check data integrity
# It excludes repozo files, as they already embed checksums
# used to verify backups when rebuilding the Data.fs
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/backup-identity-script-excluding-path --exclude-path "srv/backup/logrotate/**" --exclude-path "srv/backup/zodb/*/*fsz"
wrapper-path = ${directory:srv}/.backup_identity_script
mode = 770
[{{ section("resiliency-after-import-script") }}]
# Generate the after-import script used by the importer instance of webrunner
recipe = collective.recipe.template
input = inline: #!/bin/sh
# DO NOT RUN THIS SCRIPT ON PRODUCTION INSTANCE
# OR ZODB DATA WILL BE ERASED.
# This script restores the repozo backup to the real
# zodb location. It is launched by the clone (importer) instance of webrunner
# at the end of the import script.
# Depending on the outcome, it creates a file containing
# the status of the restoration (success or failure).
zodb_directory="${directory:zodb}"
zodb_backup_directory="{{ default_backup_path }}"
repozo="${tidstorage:repozo-binary}"
EXIT_CODE=0
{% for family, zodb in six.iteritems(zodb_dict) -%}
{% for name, zodb in zodb -%}
{% set zeo_section_name = 'zeo-' ~ family %}
storage_name="{{ name }}"
zodb_path="$storage_name.fs"
pid_file={{ "${" ~ zeo_section_name ~ ":pid-path}" }}
if [ -e "$pid_file" ]; then
pid=$(cat "$pid_file" 2>/dev/null)
if kill -0 "$pid" 2>/dev/null; then
echo "Zeo is already running with pid $pid. Aborting."
exit 1
fi
fi
echo "Removing $zodb_path..."
echo "Restoring $storage_name into $zodb_path..."
$repozo --verify --repository="$zodb_backup_directory/$storage_name"
CURRENT_EXIT_CODE=$?
if [ ! "$CURRENT_EXIT_CODE"="0" ]; then
echo "$storage_name Backup verification failed. Backup data is inconsistent."
exit "$CURRENT_EXIT_CODE"
fi
$repozo --recover --output="$zodb_directory/$zodb_path" --repository="$zodb_backup_directory/$storage_name"
CURRENT_EXIT_CODE=$?
if [ ! "$CURRENT_EXIT_CODE"="0" ]; then
EXIT_CODE="$CURRENT_EXIT_CODE"
echo "$storage_name Backup restoration failed."
fi
{% endfor -%}
{% endfor -%}
exit $EXIT_CODE
output = ${directory:srv}/runner-import-restore
mode = 755
{% endif -%}
[{{ section('promise-check-computer-memory') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-computer-memory.py
config-command = "{{ parameter_dict["check-computer-memory-binary"] }}" -db ${monitor-instance-parameter:collector-db} --threshold "{{ slapparameter_dict["computer-memory-percent-threshold"] }}" --unit percent
[publish]
recipe = slapos.cookbook:publish.serialised
storage-dict = {{ dumps(storage_dict) }}
{% if len(known_tid_storage_identifier_dict) > 1 -%}
tidstorage-ip = ${tidstorage:ip}
tidstorage-port = ${tidstorage:port}
{% else -%}
tidstorage-ip =
tidstorage-port =
{% endif -%}
monitor-base-url = ${monitor-publish-parameters:monitor-base-url}
[directory]
recipe = slapos.cookbook:mkdirectory
bin = ${buildout:directory}/bin
etc = ${buildout:directory}/etc
services = ${:etc}/run
plugin = ${:etc}/plugin
srv = ${buildout:directory}/srv
var = ${buildout:directory}/var
log = ${:var}/log
run = ${:var}/run
backup-zodb = {{ default_backup_path }}
zodb = {{ default_zodb_path }}
tidstorage = {{ tidstorage_repozo_path }}
[monitor-instance-parameter]
monitor-httpd-ipv6 = {{ (ipv6_set | list)[0] }}
monitor-httpd-port = {{ next(ports) }}
monitor-title = {{ slapparameter_dict['name'] }}
password = {{ slapparameter_dict['monitor-passwd'] }}
[buildout]
extends =
{{ template_monitor }}
parts +=
{{ part_list | join('\n ') }}
publish
{# base for instances that need to access ZODB storage #}
{# provides zodb_dict #}
{% set zodb_dict = slapparameter_dict['zodb-dict'] -%}
{% set zeo_dict = slapparameter_dict.get('zodb-zeo', {}) -%}
{% for name, zodb in six.iteritems(zodb_dict) -%}
{% set storage_dict = zodb.setdefault('storage-dict', {}) -%}
{% if zodb['type'] == 'zeo' -%}
{% do storage_dict.update(zeo_dict.get(name, ())) -%}
{% else -%}
{% if name == slapparameter_dict.get('neo-name') -%}
{% do storage_dict.update(master_nodes=slapparameter_dict['neo-masters'],
name=slapparameter_dict['neo-cluster']) -%}
{% endif -%}
{{ assert(storage_dict['master_nodes'], name) }}
{% if storage_dict.pop('ssl', 1) -%}
{% do storage_dict.update(ca='~/etc/ca.crt',
cert='~/etc/neo.crt',
key='~/etc/neo.key') -%}
{% endif -%}
{% endif -%}
{% endfor -%}
{% from "instance_zodb_base" import zodb_dict with context %}
{% set wsgi = slapparameter_dict['wsgi'] -%}
{% set webdav = slapparameter_dict['webdav'] -%}
{% set use_ipv6 = slapparameter_dict.get('use-ipv6', False) -%}
{% set ports = itertools.count(slapparameter_dict['port-base']) -%}
{% set site_id = slapparameter_dict['site-id'] -%}
{% set instance_index_list = range(slapparameter_dict['instance-count']) -%}
{% set node_id_base = slapparameter_dict['name'] -%}
{% set selenium_server_configuration_dict = slapparameter_dict.get('selenium-server-configuration-dict', None) -%}
{% set node_id_index_format = '-%%0%ii' % (len(str(instance_index_list[-1])), ) -%}
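{# Example: with instance-count 12, instance_index_list[-1] is 11, so the format
   is '-%02i' and node ids look like <name>-00 ... <name>-11. -#}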
{% set part_list = [] -%}
{% set publish_list = [] -%}
{% set test_runner_address_list = [] -%}
{% set test_runner_enabled = slapparameter_dict['test-runner-enabled'] -%}
{% set test_runner_node_count = slapparameter_dict['test-runner-node-count'] -%}
{% set test_runner_random_activity_priority = slapparameter_dict['test-runner-random-activity-priority'] -%}
{% set longrequest_logger_base_path = buildout_directory ~ '/var/log/longrequest_logger_' -%}
{% if webdav -%}
{% set timerserver_interval = 0 -%}
{% else -%}
{% set timerserver_interval = slapparameter_dict['timerserver-interval'] -%}
{%- endif %}
{% macro section(name) %}{% do part_list.append(name) %}{{ name }}{% endmacro -%}
{% set bin_directory = parameter_dict['buildout-bin-directory'] -%}
{#
XXX: This template only supports exactly one IPv4 and one IPv6 per
partition. No more (undefined result), no less (IndexError).
-#}
{% set ipv4 = (ipv4_set | list)[0] -%}
{% set publishable_hosts_dict = {} -%}
{% set port_dict = {} -%}
{% for alias, url in (
('erp5-memcached-volatile', slapparameter_dict['memcached-url']),
('erp5-memcached-persistent', slapparameter_dict['kumofs-url']),
('erp5-smtp', slapparameter_dict['smtp-url']),
) -%}
{% set parsed_url = urllib_parse.urlparse(url) -%}
{% do port_dict.__setitem__(alias, parsed_url.port) -%}
{% do publishable_hosts_dict.__setitem__(alias, parsed_url.hostname) -%}
{%- endfor %}
{% for i, url in enumerate(slapparameter_dict['mysql-url-list']) -%}
{% do publishable_hosts_dict.__setitem__(
'erp5-catalog-' ~ i,
urllib_parse.urlparse(url).hostname,
) -%}
{%- endfor %}
{% do publishable_hosts_dict.update(slapparameter_dict['hosts-dict']) -%}
{% set hosts_dict = publishable_hosts_dict.copy() -%}
{% do hosts_dict.update(slapparameter_dict['computer-hosts-dict'].get(computer, {})) -%}
[jinja2-template-base]
recipe = slapos.recipe.template:jinja2
[erp5-kernel]
recipe = slapos.cookbook:symbolic.link
link-binary = {{ parameter_dict['erp5-kernel-location'] }}/{{ parameter_dict['erp5-kernel-filename'] }}
target-directory = ${directory:erp5-kernel-dir}
[kernel-json-dir]
<= jinja2-template-base
template = {{ parameter_dict['kernel-json-location'] }}/{{ parameter_dict['kernel-json-filename'] }}
rendered = ${directory:erp5-kernel-dir}/kernel.json
context =
raw erp5_url {{ ipv4 }}
raw python_executable {{ parameter_dict['python-executable-for-kernel'] }}
raw kernel_dir ${erp5-kernel:target-directory}/{{ parameter_dict['erp5-kernel-filename'] }}
raw display_name ERP5
raw language_name python
[run-common]
<= userhosts-wrapper-base
environment-extra =
environment +=
TMP=${directory:tmp}
TMPDIR=${directory:tmp}
HOME=${buildout:directory}
PATH=${binary-link:target-directory}:{{ parameter_dict['coreutils'] }}/bin:{{ parameter_dict['perl_dbd_mariadb_path'] }}
TZ={{ slapparameter_dict['timezone'] }}
MATPLOTLIBRC={{ parameter_dict['matplotlibrc'] }}
INSTANCE_HOME=${:instance-home}
CAUCASE={{ slapparameter_dict['caucase-url'] }}
FONTCONFIG_FILE=${fontconfig-conf:output}
JUPYTER_PATH=${directory:jupyter-dir}
JUPYTER_CONFIG_DIR=${directory:jupyter-config-dir}
JUPYTER_RUNTIME_DIR=${directory:jupyter-runtime-dir}
{% if slapparameter_dict.get('wendelin-core-zblk-fmt') %}
WENDELIN_CORE_ZBLK_FMT={{ slapparameter_dict['wendelin-core-zblk-fmt'] }}
{% endif %}
WENDELIN_CORE_WCFS_AUTOSTART=no
{% if slapparameter_dict['wcfs_enable'] %}
WENDELIN_CORE_VIRTMEM=r:wcfs+w:uvmm
{% else %}
WENDELIN_CORE_VIRTMEM=rw:uvmm
{% endif %}
${:environment-extra}
[directory]
recipe = slapos.cookbook:mkdirectory
bin = ${buildout:directory}/bin
etc = ${buildout:directory}/etc
instance = ${:srv}/erp5shared
instance-constraint = ${:instance}/Constraint
instance-document = ${:instance}/Document
instance-etc = ${:instance}/etc
instance-etc-package-include = ${:instance}/etc/package-include
instance-extensions = ${:instance}/Extensions
instance-import = ${:instance}/import
instance-lib = ${:instance}/lib
instance-products = ${:instance}/Products
instance-propertysheet = ${:instance}/PropertySheet
instance-tests = ${:instance}/tests
log = ${:var}/log
run = ${:var}/run
services = ${:etc}/run
service-on-watch = ${:etc}/service
srv = ${buildout:directory}/srv
ca-dir = ${:srv}/ssl
tmp = ${buildout:directory}/tmp
var = ${buildout:directory}/var
plugin = ${:etc}/plugin
unit-test-path = ${:srv}/test-instance/unit_test
fontconfig-cache = ${buildout:directory}/.fontconfig
jupyter-dir = ${buildout:directory}/jupyter
jupyter-config-dir = ${:jupyter-dir}/etc
jupyter-runtime-dir = ${:jupyter-dir}/runtime
jupyter-kernel-dir = ${:jupyter-dir}/kernels
erp5-kernel-dir = ${:jupyter-kernel-dir}/ERP5
[fontconfig-conf]
recipe = slapos.recipe.template:jinja2
url = {{ parameter_dict['template-fonts-conf'] }}
output = ${directory:etc}/fonts.conf
context =
key cachedir directory:fontconfig-cache
key fonts :fonts
key includes :includes
fonts =
{% for font in parameter_dict['fonts'].splitlines() %}
{{ font }}
{% endfor %}
includes =
{% for include in parameter_dict['fontconfig-includes'].splitlines() %}
{{ include }}
{% endfor %}
# Used for ERP5 resiliency or (more probably)
# webrunner resiliency with erp5 inside.
[{{ section("resiliency-exclude-file") }}]
# Generate rdiff exclude file
recipe = slapos.recipe.template:jinja2
inline = {{ '{{ "${directory:log}/**\\n${directory:tmp}/**\\n" }}' }}
output = ${directory:srv}/exporter.exclude
[{{ section("resiliency-identity-signature-script")}}]
# Generate identity script used by webrunner to check data integrity
recipe = slapos.cookbook:wrapper
command-line = {{ bin_directory }}/backup-identity-script-excluding-path --exclude-path "srv/backup/logrotate/**"
wrapper-path = ${directory:srv}/.backup_identity_script
mode = 770
[binary-link]
recipe = slapos.cookbook:symbolic.link
target-directory = ${directory:bin}
link-binary = {{ dumps(parameter_dict['link-binary']) }}
{% if use_ipv6 -%}
{% set ipv6 = (ipv6_set | list)[0] -%}
[ipv6toipv4-base]
recipe = slapos.cookbook:ipv6toipv4
runner-path = ${directory:services}/${:base-name}
6tunnel-path = {{ parameter_dict['6tunnel'] }}/bin/6tunnel
shell-path = {{ parameter_dict['dash'] }}/bin/dash
ipv4 = {{ ipv4 }}
ipv6 = {{ ipv6 }}
{% endif -%}
[hosts-parameter]
# Used for both hosts and hostaliases sections.
host-dict = {{ dumps(hosts_dict) }}
hostalias-dict = {{ dumps(slapparameter_dict['hostalias-dict']) }}
# Note: there is a subtle difference between hosts and hostaliases files:
# - hosts files start with resolved, followed by alias(es) (only one alias per
# line in this case)
# - hostaliases start with alias, followed by resolved
# ...so it's not possible to merge these templates (not a big deal anyway).
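# Example with alias "erp5-catalog-0" resolving to 10.0.0.2 (hypothetical
# values): the hosts file gets "10.0.0.2 erp5-catalog-0" while the hostaliases
# file gets "erp5-catalog-0 10.0.0.2".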
[hostaliases]
< = jinja2-template-base
inline = {{ '
{% for alias, aliased in host_dict.items() -%}
{{ alias }} {{ aliased }}
{% endfor %}
' }}
output = ${directory:etc}/hostaliases
context = key host_dict hosts-parameter:hostalias-dict
[hosts]
< = jinja2-template-base
inline = {{ '
{% for alias, aliased in host_dict.items() -%}
{{ aliased }} {{ alias }}
{% endfor %}
' }}
output = ${directory:etc}/hosts
context = key host_dict hosts-parameter:host-dict
[userhosts-bin]
recipe = slapos.recipe.template:jinja2
inline =
#!{{ parameter_dict['dash'] }}/bin/dash
export HOSTS_FILE=${hosts:output}
export LD_PRELOAD={{ parameter_dict['userhosts'] }}:$LD_PRELOAD
exec "$@"
output = ${directory:bin}/userhosts
[userhosts-wrapper-base]
recipe = slapos.cookbook:wrapper
environment =
HOSTALIASES=${hostaliases:output}
command-line = '${userhosts-bin:output}' ${:wrapped-command-line}
{# Hack to deploy SSL certs via instance parameters -#}
{% for zodb in six.itervalues(zodb_dict) -%}
{% set storage_dict = zodb.setdefault('storage-dict', {}) -%}
{% if zodb['type'] == 'neo' and storage_dict.get('ssl', 1) -%}
{% for k, v in (('_ca', 'ca.crt'),
('_cert', 'neo.crt'),
('_key', 'neo.key')) -%}
{% if k in storage_dict -%}
[{{ section('neo-ssl-' + k[1:]) }}]
< = jinja2-template-base
output = ${directory:etc}/{{v}}
inline = {{'{{'}}pem}}
context = key pem :pem
pem = {{dumps(storage_dict.pop(k))}}
{% endif -%}
{% endfor -%}
{% endif -%}
{% endfor -%}
{# endhack -#}
[runzope-base]
<= run-common
instance-home = ${directory:instance}
{% if wsgi -%}
wrapped-command-line = '{{ bin_directory }}/runwsgi' {% if webdav %}-w{% endif %} {{ ipv4 }}:${:port} {% if timerserver_interval %}--timerserver-interval={{ timerserver_interval }}{% endif %} '${:configuration-file}'
{% else -%}
wrapped-command-line = '{{ bin_directory }}/runzope' -C '${:configuration-file}'
{%- endif %}
{%- set private_dev_shm = slapparameter_dict['private-dev-shm'] %}
{%- if private_dev_shm %}
private-tmpfs = {{ private_dev_shm }} /dev/shm
{%- endif %}
[{{ section('zcml') }}]
recipe = slapos.cookbook:copyfilelist
target-directory = ${directory:instance-etc}
file-list = {{ parameter_dict['site-zcml'] }}
[{{ section('zope-inituser') }}]
< = jinja2-template-base
output = ${directory:instance}/inituser
inline = {{ slapparameter_dict['inituser-login'] }}:{SHA}{{ base64.b64encode(hashlib.sha1(slapparameter_dict['inituser-password'].encode('utf-8')).digest()) }}
once = ${:output}_done
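# The rendered file contains a single "login:{SHA}<base64 of sha1 digest>" line,
# e.g. "zope:{SHA}2jmj7l5rSw0yVb/vlWAYkK/YBwk=" (digest of the empty string,
# shown for illustration only).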
[zope-conf-parameter-base]
ip = {{ ipv4 }}
site-id = {{ site_id }}
{% if site_id -%}
mysql-url = {{ slapparameter_dict['mysql-url-list'][0] }}
inituser = {{ slapparameter_dict['inituser-login'] }}
{% set mysql = urllib_parse.urlsplit(slapparameter_dict['mysql-url-list'][0]) -%}
{% set mysql_db = mysql.path.split('/')[1] -%}
sql-connection-string = {{ '%s@erp5-catalog-0:%s %s %s' % (
mysql_db, mysql.port, mysql.username, mysql.password) }}
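# e.g. "erp5@erp5-catalog-0:2099 user password" for a first mysql-url-list
# entry of mysql://user:password@10.0.0.1:2099/erp5 (hypothetical values).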
bt5 = {{ slapparameter_dict['bt5'] }}
bt5-repository-url = {{ slapparameter_dict['bt5-repository-url'] }}
id-store-interval = {{ dumps(slapparameter_dict['id-store-interval']) }}
home = ${buildout:directory}
cloudooo-url = {{ ",".join(slapparameter_dict['cloudooo-url-list']) }}
{% endif -%}
developer-list = {{ dumps(slapparameter_dict['developer-list']) }}
publisher-timeout = {{ dumps(slapparameter_dict['publisher-timeout']) }}
activity-timeout = {{ dumps(slapparameter_dict['activity-timeout']) }}
instance = ${directory:instance}
instance-products = ${directory:instance-products}
deadlock-path = /manage_debug_threads
deadlock-debugger-password = {{ dumps(slapparameter_dict['deadlock-debugger-password']) }}
{% if slapparameter_dict.get('tidstorage-ip') -%}
tidstorage-ip = {{ dumps(slapparameter_dict['tidstorage-ip']) }}
tidstorage-port = {{ dumps(slapparameter_dict['tidstorage-port']) }}
{% endif -%}
{% set thread_amount = slapparameter_dict['thread-amount'] -%}
{% set large_file_threshold = slapparameter_dict['large-file-threshold'] -%}
thread-amount = {{ thread_amount }}
webdav = {{ dumps(webdav) }}
wsgi = {{ dumps(wsgi) }}
timerserver-interval = {{ dumps(timerserver_interval) }}
[zope-conf-base]
< = jinja2-template-base
url = {{ parameter_dict['zope-conf-template'] }}
extensions =
jinja2.ext.do
jinja2.ext.loopcontrols
import-list =
rawfile root_common {{ root_common }}
{% macro zope(
index,
port,
longrequest_logger_timeout,
longrequest_logger_interval
) -%}
{% set name = 'zope-' ~ index -%}
{% set conf_name = name ~ '-conf' -%}
{% set conf_parameter_name = conf_name ~ '-param' -%}
{% set zope_tunnel_section_name = name ~ '-ipv6toipv4' -%}
{% set zope_tunnel_base_name = zope_tunnel_section_name -%}
[{{ conf_parameter_name }}]
< = zope-conf-parameter-base
pid-file = ${directory:run}/{{ name }}.pid
lock-file = ${directory:run}/{{ name }}.lock
port = {{ port }}
event-log = ${directory:log}/{{ name }}-event.log
z2-log = ${directory:log}/{{ name }}-Z2.log
node-id = {{ dumps(node_id_base ~ (node_id_index_format % index)) }}
{% set log_list = [] -%}
{% set import_set = set() -%}
{% for db_name, zodb in six.iteritems(zodb_dict) -%}
{% do zodb.setdefault('pool-size', thread_amount) -%}
{% if zodb['type'] == 'neo' -%}
{% do import_set.add('neo.client') -%}
{% set log = name ~ '-neo-' ~ db_name ~ '.log' -%}
{% do log_list.append('${directory:log}/' + log) -%}
{% do zodb['storage-dict'].update(logfile='~/var/log/'+log) -%}
{% endif -%}
{% endfor -%}
import-list = {{ dumps(list(import_set)) }}
zodb-dict = {{ dumps(zodb_dict) }}
large-file-threshold = {{ large_file_threshold }}
{% if longrequest_logger_interval > 0 -%}
longrequest-logger-file = {{ longrequest_logger_base_path ~ name ~ ".log" }}
longrequest-logger-timeout = {{ longrequest_logger_timeout }}
longrequest-logger-interval = {{ longrequest_logger_interval }}
{% else -%}
longrequest-logger-file =
{% endif -%}
[{{ conf_name }}]
< = zope-conf-base
output = ${directory:etc}/{{ name }}.conf
context =
section parameter_dict {{ conf_parameter_name }}
import os os
import re re
[{{ section(name) }}]
< = runzope-base
wrapper-path = ${directory:service-on-watch}/{{ name }}
configuration-file = {{ '${' ~ conf_name ~ ':output}' }}
{%- if wsgi %}
port = {{ port }}
{%- endif %}
hash-files =
${:configuration-file}
hash-existing-files =
${buildout:directory}/software_release/buildout.cfg
[{{ section("promise-" ~ name) }}]
{% if six.PY3 -%}
# Disable the promise in Python 3. ERP5 is not compatible with Python 3 yet, so
# the promise would always fail.
recipe =
{% else -%}
<= monitor-promise-base
promise = check_socket_listening
name = {{ name }}.py
config-host = {{ ipv4 }}
config-port = {{ port }}
{% endif -%}
{% if use_ipv6 -%}
[{{ zope_tunnel_section_name }}]
< = ipv6toipv4-base
base-name = {{ zope_tunnel_base_name }}
ipv6-port = {{ port }}
ipv4-port = {{ port }}
{% do publish_list.append(("[" ~ ipv6 ~ "]:" ~ port, thread_amount, webdav)) -%}
[{{ section("promise-tunnel-" ~ name) }}]
<= monitor-promise-base
promise = check_socket_listening
name = {{ zope_tunnel_base_name }}.py
config-host = {{ '${' ~ zope_tunnel_section_name ~ ':ipv6}' }}
config-port = {{ '${' ~ zope_tunnel_section_name ~ ':ipv6-port}' }}
{% else -%}
{% do publish_list.append((ipv4 ~ ":" ~ port, thread_amount, webdav)) -%}
{% endif -%}
{% if longrequest_logger_interval > 0 -%}
[{{ section('promise-check-' ~name ~ '-longrequest-error-log') }}]
<= monitor-promise-base
promise = check_error_on_zope_longrequest_log
name = {{'check-' ~ name ~ '-longrequest-error-log.py'}}
config-log-file = {{ '${' ~ conf_parameter_name ~ ':longrequest-logger-file}' }}
config-error-threshold = {{ slapparameter_dict["zope-longrequest-logger-error-threshold"] }}
config-maximum-delay = {{ slapparameter_dict["zope-longrequest-logger-maximum-delay"] }}
{% endif -%}
[{{ section('logrotate-entry-' ~ name) }}]
< = logrotate-entry-base
name = {{ name }}
log = {{ '${' ~ conf_parameter_name ~ ':event-log}' }} {{ '${' ~ conf_parameter_name ~ ':z2-log}' }} {{ '${' ~ conf_parameter_name ~ ':longrequest-logger-file}' }} {{ ' '.join(log_list) }}
post = test ! -s {{ '${' ~ conf_parameter_name ~ ':pid-file}' }} || {{ bin_directory }}/slapos-kill --pidfile {{ '${' ~ conf_parameter_name ~ ':pid-file}' }} -s USR2
{% endmacro -%}
{% for i in instance_index_list -%}
{{ zope(
i,
next(ports),
slapparameter_dict['longrequest-logger-timeout'],
slapparameter_dict['longrequest-logger-interval'],
) }}
{% endfor -%}
[{{ section("watch_activities") }}]
<= userhosts-wrapper-base
environment +=
MYSQL=${binary-link:target-directory}/mysql
wrapped-command-line = {{ parameter_dict['erp5-location'] }}/product/CMFActivity/bin/${:_buildout_section_name_} "-h erp5-catalog-0 -P {{ mysql.port }} -u {{ mysql.username }} -p{{ mysql.password}} {{ mysql_db }}" 5 3600
wrapper-path = ${buildout:bin-directory}/${:_buildout_section_name_}
[{{ section("wait_activities") }}]
<= watch_activities
{% if test_runner_enabled and test_runner_node_count -%}
{% for _ in range(test_runner_node_count) %}
{% do test_runner_address_list.append((ipv4, next(ports))) %}
{% endfor %}
{% if selenium_server_configuration_dict -%}
[test-zelenium-runner-parameter]
configuration = {{ dumps(selenium_server_configuration_dict) }}
user = {{ dumps(slapparameter_dict['inituser-login']) }}
password = {{ dumps(slapparameter_dict['inituser-password']) }}
bin-path = {{ bin_directory }}/{{ parameter_dict['egg-interpreter'] }}
[{{ section('test-zelenium-runner') }}]
<= jinja2-template-base
url = {{ parameter_dict['run-zelenium-template'] }}
output = ${directory:bin}/runTestSuite
context =
import json_module json
key configuration test-zelenium-runner-parameter:configuration
key user test-zelenium-runner-parameter:user
key password test-zelenium-runner-parameter:password
key bin_path test-zelenium-runner-parameter:bin-path
{% else -%}
[{{ section('run-unit-test-userhosts-wrapper') }}]
<= userhosts-wrapper-base
wrapped-command-line = ${runUnitTest:wrapper-path}
wrapper-path = ${buildout:bin-directory}/runUnitTest
[{{ section('run-test-suite-userhosts-wrapper') }}]
<= userhosts-wrapper-base
wrapped-command-line = ${runTestSuite:wrapper-path}
wrapper-path = ${buildout:bin-directory}/runTestSuite
{% set connection_string_list = [] -%}
{% for url in slapparameter_dict['mysql-test-url-list'] -%}
{% set parsed_url = urllib_parse.urlparse(url) -%}
{% do connection_string_list.append(
'%s@%s:%s %s %s' % (
parsed_url.path.lstrip('/'),
parsed_url.hostname,
parsed_url.port,
parsed_url.username,
parsed_url.password,
),
) -%}
{% endfor -%}
[run-test-common]
< = run-common
environment-extra =
REAL_INSTANCE_HOME=${:instance-home}
SQLBENCH_PATH={{ parameter_dict['sqlbench_path'] }}
TEST_CA_PATH=${directory:ca-dir}
ERP5_TEST_RUNNER_CONFIGURATION=${test-runner-configuration:output}
{%- if wsgi %}
erp5_wsgi=1
{%- endif %}
instance-home = ${directory:unit-test-path}
wrapper-path = ${directory:bin}/${:command-name}.real
command-line =
'{{ parameter_dict['bin-directory'] }}/${:command-name}'
${:command-line-extra}
--conversion_server_url={{ ",".join(slapparameter_dict['cloudooo-url-list']) }}
--conversion_server_retry_count={{ slapparameter_dict.get('cloudooo-retry-count', 2) }}
--volatile_memcached_server_hostname=erp5-memcached-volatile
--volatile_memcached_server_port={{ port_dict['erp5-memcached-volatile'] }}
--persistent_memcached_server_hostname=erp5-memcached-persistent
--persistent_memcached_server_port={{ port_dict['erp5-memcached-persistent'] }}
[test-runner-configuration]
recipe = slapos.recipe.template:jinja2
output = ${directory:etc}/${:_buildout_section_name_}.json
inline =
{{ json.dumps(slapparameter_dict['test-runner-configuration']) }}
[{{ section('runUnitTest') }}]
< = run-test-common
command-name = runUnitTest
command-line-extra =
--erp5_sql_connection_string '{{ connection_string_list[0] }}'
--extra_sql_connection_string_list '{{ ','.join(connection_string_list[1:]) }}'
--zserver {{ test_runner_address_list[0][0] ~ ':' ~ test_runner_address_list[0][1] }}
--zserver_frontend_url {{ slapparameter_dict['test-runner-apache-url-list'][0] }}
{% if test_runner_random_activity_priority is not none %}
--random_activity_priority={{ test_runner_random_activity_priority }}
{%- endif %}
[{{ section('runTestSuite') }}]
< = run-test-common
command-name = runTestSuite
command-line-extra =
--db_list '{{ ','.join(connection_string_list) }}'
environment-extra +=
{#- turn a list of (ip, port) into a list of 'ip:port' #}
{% set zserver_address_list = [] -%}
{% for ip, port in test_runner_address_list %}
{% do zserver_address_list.append(ip ~ ':' ~ port) %}
{% endfor -%}
zserver_address_list={{ ','.join(zserver_address_list) }}
zserver_frontend_url_list={{ ','.join(slapparameter_dict['test-runner-apache-url-list']) }}
[promise-test-runner-apache-url-executable]
# Promise waiting for the apache partition to have published the parameter.
recipe = slapos.cookbook:check_parameter
value = {{ slapparameter_dict['test-runner-apache-url-list'] }}
expected-not-value = not-ready
path = ${directory:bin}/${:_buildout_section_name_}
expected-value =
[{{ section("promise-test-runner-apache-url") }}]
<= monitor-promise-base
promise = check_command_execute
name = ${:_buildout_section_name_}.py
config-command = ${promise-test-runner-apache-url-executable:path}
{%- endif %}
{%- endif %}
[{{ section('promise-check-computer-memory') }}]
<= monitor-promise-base
promise = check_command_execute
name = check-computer-memory.py
config-command = "{{ parameter_dict["check-computer-memory-binary"] }}" -db ${monitor-instance-parameter:collector-db} --threshold "{{ slapparameter_dict["computer-memory-percent-threshold"] }}" --unit percent
[publish]
recipe = slapos.cookbook:publish.serialised
zope-address-list = {{ dumps(publish_list) }}
{#
Note: hosts-dict is generated at zope level rather than at erp5 (root partition)
level, as it is easier: we can trivially access urls as python values here.
This has the downside of making each zope partition publish the (hopefully)
same dict toward the erp5 partition, violating the DRY principle and making
the intent hard to guess.
-#}
hosts-dict = {{ dumps(publishable_hosts_dict) }}
monitor-base-url = ${monitor-publish-parameters:monitor-base-url}
test-runner-address-list = {{ dumps(test_runner_address_list) }}
software-release-url = ${slap-connection:software-release-url}
[monitor-instance-parameter]
monitor-httpd-ipv6 = {{ (ipv6_set | list)[0] }}
monitor-httpd-port = {{ next(ports) }}
monitor-title = {{ slapparameter_dict['name'] }}
password = {{ slapparameter_dict['monitor-passwd'] }}
[buildout]
extends =
{{ template_monitor }}
parts +=
{{ '\n '.join(part_list) }}
publish
erp5-kernel
kernel-json-dir
[buildout]
extends =
{{ instance_common_cfg }}
[default-dynamic-template-parameters]
bin-directory = {{ bin_directory }}
buildout-bin-directory = {{ buildout_bin_directory }}
check-computer-memory-binary = {{ bin_directory }}/check-computer-memory
[dynamic-template-postfix-parameters]
<= default-dynamic-template-parameters
cyrus-sasl-location = {{ cyrus_sasl_location }}
openssl = {{ openssl_location }}
postfix-location = {{ postfix_location }}
template-postfix-aliases = {{ template_postfix_aliases }}
template-postfix-main-cf = {{ template_postfix_main_cf }}
template-postfix-master-cf = {{ template_postfix_master_cf }}
xz-utils-location = {{ xz_utils_location }}
[dynamic-template-postfix]
< = jinja2-template-base
url = {{ template_postfix }}
filename = instance-postfix.cfg
extensions = jinja2.ext.do
extra-context =
section parameter_dict dynamic-template-postfix-parameters
import urllib urllib
[default-cloudooo-url-list]
recipe = slapos.recipe.build
default-url-list =
{{ default_cloudooo_url_list | indent }}
init =
# Expose the default cloudooo URLs, shuffled using a pseudo-random seed,
# so that the same instance keeps the same order
import six, random
seed = repr(sorted(six.iteritems(self.buildout['slap-connection'])))
default_cloudooo_url_list = [url.strip() for url in self.options['default-url-list'].splitlines()]
random.Random(seed).shuffle(default_cloudooo_url_list)
self.options['url-list'] = default_cloudooo_url_list
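# Example: with two default URLs A and B, one partition may end up with
# [B, A] while another gets [A, B], but a given partition keeps its order
# across reruns, since the seed is derived from its slap-connection values.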
[dynamic-template-erp5-parameters]
default-cloudooo-url-list = ${default-cloudooo-url-list:url-list}
jupyter-enable-default = {{ jupyter_enable_default }}
wcfs-enable-default = {{ wcfs_enable_default }}
local-bt5-repository = {{ ' '.join(local_bt5_repository.split()) }}
[context]
root-common = {{ root_common }}
caucase-jinja2-library = {{ caucase_jinja2_library }}
template-zodb-base = {{ template_zodb_base }}
[dynamic-template-erp5]
<= jinja2-template-base
url = {{ template_erp5 }}
filename = instance-erp5.cfg
extra-context =
key default_cloudooo_url_list dynamic-template-erp5-parameters:default-cloudooo-url-list
key jupyter_enable_default dynamic-template-erp5-parameters:jupyter-enable-default
key wcfs_enable_default dynamic-template-erp5-parameters:wcfs-enable-default
key local_bt5_repository dynamic-template-erp5-parameters:local-bt5-repository
key openssl_location :openssl-location
import re re
import urllib_parse six.moves.urllib.parse
import-list =
file root_common context:root-common
file caucase context:caucase-jinja2-library
openssl-location = {{ openssl_location }}
[dynamic-template-balancer-parameters]
<= default-dynamic-template-parameters
openssl = {{ openssl_location }}
haproxy = {{ haproxy_location }}
rsyslogd = {{ rsyslogd_location }}
socat = {{ socat_location }}
apachedex-location = {{ bin_directory }}/apachedex
run-apachedex-location = {{ bin_directory }}/runApacheDex
promise-check-apachedex-result = {{ bin_directory }}/check-apachedex-result
template-haproxy-cfg = {{ template_haproxy_cfg }}
template-rsyslogd-cfg = {{ template_rsyslogd_cfg }}
# XXX: only used in software/slapos-master:
apache = {{ apache_location }}
template-apache-conf = {{ template_apache_conf }}
[dynamic-template-balancer]
<= jinja2-template-base
url = {{ template_balancer }}
filename = instance-balancer.cfg
extra-context =
section parameter_dict dynamic-template-balancer-parameters
import itertools itertools
import hashlib hashlib
import functools functools
import-list =
file caucase context:caucase-jinja2-library
[dynamic-template-zeo-parameters]
<= default-dynamic-template-parameters
[dynamic-template-zeo]
<= jinja2-template-base
url = {{ template_zeo }}
filename = instance-zeo.cfg
extra-context =
key buildout_directory buildout:directory
section parameter_dict dynamic-template-zeo-parameters
import json_module json
import itertools itertools
[dynamic-template-zope-parameters]
<= default-dynamic-template-parameters
zope-conf-template = {{ template_zope_conf }}
run-zelenium-template = {{ template_run_zelenium }}
6tunnel = {{ sixtunnel_location }}
coreutils = {{ coreutils_location }}
sqlbench_path = {{ mariadb_location }}/sql-bench
perl_dbd_mariadb_path = {{ perl_dbd_mariadb_path }}
dash = {{ dash_location }}
link-binary = {{ dumps(zope_link_binary) }}
fonts = {{ dumps(zope_fonts) }}
fontconfig-includes = {{ dumps(zope_fontconfig_includes) }}
template-fonts-conf = {{ dumps(template_fonts_conf) }}
userhosts = {{ userhosts_location }}/lib/libuserhosts.so
site-zcml = {{ site_zcml }}
extra-path-list = {{ dumps(extra_path_list) }}
matplotlibrc = {{ matplotlibrc_location }}
erp5-location = {{ erp5_location }}
egg-interpreter = {{egg_interpreter}}
erp5-kernel-location = {{ erp5_kernel_location }}
erp5-kernel-filename = {{ erp5_kernel_filename }}
kernel-json-location = {{ kernel_json_location }}
kernel-json-filename = {{ kernel_json_filename }}
python-executable-for-kernel = {{ python_executable_for_kernel }}
[dynamic-template-zope]
<= jinja2-template-base
url = {{ template_zope }}
filename = instance-zope.cfg
extra-context =
key buildout_directory buildout:directory
key root_common context:root-common
section parameter_dict dynamic-template-zope-parameters
import base64 base64
import urllib_parse six.moves.urllib.parse
import hashlib hashlib
import itertools itertools
import json json
import-list =
file instance_zodb_base context:template-zodb-base
[dynamic-template-kumofs-parameters]
<= default-dynamic-template-parameters
dash-location = {{ dash_location }}
dcron-location = {{ dcron_location }}
gzip-location = {{ gzip_location }}
kumo-location = {{ kumo_location }}
logrotate-location = {{ logrotate_location }}
[dynamic-template-kumofs]
<= jinja2-template-base
url = {{ template_kumofs }}
filename = instance-kumofs.cfg
extra-context =
section parameter_dict dynamic-template-kumofs-parameters
[dynamic-template-mariadb-parameters]
<= default-dynamic-template-parameters
bash = {{ bash_location }}
coreutils-location = {{ coreutils_location }}
dash-location = {{ dash_location }}
findutils-location = {{ findutils_location }}
gzip-location = {{ gzip_location }}
xz-utils-location = {{ xz_utils_location }}
mariadb-location = {{ mariadb_location }}
template-my-cnf = {{ template_my_cnf }}
template-mariadb-initial-setup = {{ template_mariadb_initial_setup }}
template-mysqld-wrapper = {{ template_mysqld_wrapper }}
link-binary = {{ dumps(mariadb_link_binary) }}
mariadb-resiliency-after-import-script = {{ mariadb_resiliency_after_import_script }}
mariadb-slow-query-report-script = {{ mariadb_slow_query_report_script }}
mariadb-start-clone-from-backup = {{ mariadb_start_clone_from_backup }}
promise-check-slow-queries-digest-result = {{ bin_directory }}/check-slow-queries-digest-result
percona-tools-location = {{ percona_toolkit_location }}
unixodbc-location = {{ unixodbc_location }}
mroonga-mariadb-install-sql = {{ mroonga_mariadb_install_sql }}
mroonga-mariadb-plugin-dir = {{ mroonga_mariadb_plugin_dir }}
groonga-plugins-path = {{ groonga_plugin_dir }}:{{ groonga_mysql_normalizer_plugin_dir }}
[dynamic-template-mariadb]
<= jinja2-template-base
url = {{ template_mariadb }}
filename = instance-mariadb.cfg
extra-context =
section parameter_dict dynamic-template-mariadb-parameters
# Keep a section for backward compatibility with removed types.
# Once the section is removed, ghost instances will keep failing until
# garbage collection is implemented.
[dynamic-template-legacy]
recipe = collective.recipe.template
input = inline:[buildout]
eggs-directory = ${buildout:eggs-directory}
develop-eggs-directory = ${buildout:develop-eggs-directory}
offline = true
parts =
output = ${directory:directory}/instance-legacy.cfg
mode = 644
# we need this value to be present in a section,
# for slapos.cookbook:switch-softwaretype to work
[dynamic-template-jupyter]
output = {{ template_jupyter_cfg }}
[dynamic-template-wcfs]
<= jinja2-template-base
url = {{ instance_wcfs_cfg_in }}
filename = instance_wcfs.cfg
extra-context =
section parameter_dict dynamic-template-zope-parameters
import urllib_parse six.moves.urllib.parse
import-list =
file instance_zodb_base context:template-zodb-base
[switch-softwaretype]
recipe = slapos.cookbook:switch-softwaretype
override = {{ dumps(override_switch_softwaretype |default) }}
# Public software types
default = dynamic-template-erp5:output
# BBB
RootSoftwareInstance = ${:default}
# Internal software types
kumofs = dynamic-template-kumofs:output
mariadb = dynamic-template-mariadb:output
balancer = dynamic-template-balancer:output
postfix = dynamic-template-postfix:output
zodb-zeo = dynamic-template-zeo:output
zodb-neo = neo:output
zope = dynamic-template-zope:output
jupyter = dynamic-template-jupyter:output
wcfs = dynamic-template-wcfs:output
# Keep cloudooo and caucase backward compatibility
cloudooo = dynamic-template-legacy:output
caucase = dynamic-template-legacy:output
USE mysql;
{% set mroonga = parameter_dict.get('mroonga', 'ha_mroonga.so') -%}
{% if mroonga %}
SOURCE {{ parameter_dict['mroonga-mariadb-install-sql'] }};
{% endif %}
DROP FUNCTION IF EXISTS sphinx_snippets;
{% macro database(name, user, password, with_process_privilege) -%}
CREATE DATABASE IF NOT EXISTS `{{ name }}`;
{% if user -%}
GRANT ALL PRIVILEGES ON `{{ name }}`.* TO `{{ user }}`@`%` IDENTIFIED BY '{{ password }}';
GRANT ALL PRIVILEGES ON `{{ name }}`.* TO `{{ user }}`@localhost IDENTIFIED BY '{{ password }}';
{% if with_process_privilege %}
GRANT PROCESS ON *.* TO `{{ user }}`@`%` IDENTIFIED BY '{{ password }}';
GRANT PROCESS ON *.* TO `{{ user }}`@localhost IDENTIFIED BY '{{ password }}';
{%- endif %}
{%- endif %}
{% endmacro -%}
{% for entry in parameter_dict['database-list'] -%}
{{ database(entry['name'], entry.get('user'), entry.get('password'), entry.get('with-process-privilege')) }}
{% endfor -%}
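-- Illustration (hypothetical values, not rendered from a real request): a
-- database-list entry {"name": "erp5", "user": "user", "password": "secret",
-- "with-process-privilege": true} makes the macro above emit roughly:
--   CREATE DATABASE IF NOT EXISTS `erp5`;
--   GRANT ALL PRIVILEGES ON `erp5`.* TO `user`@`%` IDENTIFIED BY 'secret';
--   GRANT ALL PRIVILEGES ON `erp5`.* TO `user`@localhost IDENTIFIED BY 'secret';
--   GRANT PROCESS ON *.* TO `user`@`%` IDENTIFIED BY 'secret';
--   GRANT PROCESS ON *.* TO `user`@localhost IDENTIFIED BY 'secret';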
{% set socket = parameter_dict['socket'] -%}
# ERP5 buildout my.cnf template based on my-huge.cnf shipped with mysql
# The MySQL server
[mysqld]
# ERP5 requires the InnoDB storage engine. By default, MySQL falls back to a
# different engine, like MyISAM. This only generates problems, as tables
# requested as InnoDB are silently created with the MyISAM engine instead.
#
# Failing loudly is really required in such a case.
sql_mode="NO_ENGINE_SUBSTITUTION"
skip_show_database
{% set ip = parameter_dict.get('ip') -%}
{% if ip -%}
bind_address = {{ ip }}
port = {{ parameter_dict['port'] }}
{% else -%}
skip_networking
{% endif -%}
socket = {{ socket }}
datadir = {{ parameter_dict['data-directory'] }}
tmpdir = {{ parameter_dict['tmp-directory'] }}
pid_file = {{ parameter_dict['pid-file'] }}
log_error = {{ parameter_dict['error-log'] }}
slow_query_log
slow_query_log_file = {{ parameter_dict['slow-query-log'] }}
long_query_time = {{ parameter_dict['long-query-time'] }}
max_allowed_packet = 128M
query_cache_size = 32M
innodb_file_per_table = {{ parameter_dict['innodb-file-per-table'] }}
default_time_zone = '+00:00'
plugin_load = ha_mroonga
plugin-dir = {{ parameter_dict['plugin-directory'] }}
max_connections = {{ parameter_dict['max-connection-count'] }}
{% set innodb_buffer_pool_size = parameter_dict['innodb-buffer-pool-size'] -%}
{% if innodb_buffer_pool_size %}innodb_buffer_pool_size = {{ innodb_buffer_pool_size }}{% endif %}
{% set innodb_buffer_pool_instances = parameter_dict['innodb-buffer-pool-instances'] -%}
{% if innodb_buffer_pool_instances %}innodb_buffer_pool_instances = {{ innodb_buffer_pool_instances }}{% endif %}
{% set innodb_log_file_size = parameter_dict['innodb-log-file-size'] -%}
{% if innodb_log_file_size %}innodb_log_file_size = {{ innodb_log_file_size }}{% endif %}
{% set innodb_log_buffer_size = parameter_dict['innodb-log-buffer-size'] -%}
{% if innodb_log_buffer_size %}innodb_log_buffer_size = {{ innodb_log_buffer_size }}{% endif %}
# very important to allow parallel indexing
# Note: this is compatible with binlog-based incremental backups, because ERP5
# doesn't use "insert ... select" (in any number of queries) pattern.
innodb_locks_unsafe_for_binlog = 1
{% set log_bin = parameter_dict['binlog-path'] -%}
{% if log_bin -%}
log_bin = {{ log_bin }}
{% set binlog_expire_days = parameter_dict['binlog-expire-days'] -%}
{% if binlog_expire_days > 0 %}expire_logs_days = {{ binlog_expire_days }}{% endif %}
server_id = {{ parameter_dict['server-id'] }}
# Note: this increases writes on a replica (it writes to the relay log and to
# the database as usual, but then also to the binlog), with the advantage of
# allowing such a node to switch roles later, or allowing another replica to
# catch up with it if the primary is not available anymore. This also
# increases disk usage on the replica, as the binlogs are stored as usual.
log_slave_updates = 1
# Note: as of at least mariadb 10.3.27, the relay log name is not based on the
# hostname, but a warning about this missing option is still written to
# mariadb_error.log on every start, so provide it.
relay-log = mariadb-relay-bin
{% endif %}
# Some dangerous settings you may want to uncomment temporarily,
# if you only care about performance and reduced disk access.
{% set x = '' if parameter_dict['relaxed-writes'] else '#' -%}
{{x}}innodb_flush_log_at_trx_commit = 0
{{x}}innodb_flush_method = nosync
{{x}}innodb_doublewrite = 0
{{x}}sync_frm = 0
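# Illustration: when the relaxed-writes instance parameter is enabled, the
# four settings above are rendered active (x == ''); otherwise each line is
# rendered prefixed with '#', keeping the safe, durable defaults.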
character_set_server = {{ parameter_dict['character-set-server'] }}
collation_server = {{ parameter_dict['collation-server'] }}
skip_character_set_client_handshake
{% if 'ssl-key' in parameter_dict -%}
ssl_cert = {{ parameter_dict['ssl-crt'] }}
ssl_key = {{ parameter_dict['ssl-key'] }}
{% if 'ssl-ca-crt' in parameter_dict -%}
ssl_ca = {{ parameter_dict['ssl-ca-crt'] }}
{%- endif %}
{% if 'ssl-crl' in parameter_dict -%}
ssl_crl = {{ parameter_dict['ssl-crl'] }}
{%- endif %}
{% if 'ssl-cipher' in parameter_dict -%}
ssl_cipher = {{ parameter_dict['ssl-cipher'] }}
{%- endif %}
{%- endif %}
[client]
socket = {{ socket }}
user = root
[mysql]
no_auto_rehash
[mysqlhotcopy]
interactive_timeout
[mysqldump]
max_allowed_packet = 128M
#!{{dash}}
# BEWARE: This file is operated by slapos node
set -e
SLOW_QUERY_PATH='{{slow_query_path}}'
OUTPUT_FOLDER='{{output_folder}}'
PT_QUERY_EXEC='{{pt_query_exec}}'
if [ ! -d "$OUTPUT_FOLDER" ]; then
echo "ERROR: output_folder don't exists"
exit 1
fi
dashed_today=$(date +%Y-%m-%d)
today=$(date -d "$dashed_today" +%Y%m%d)
SLOW_LOG="$SLOW_QUERY_PATH-$today"
OUTPUT_FILE="$OUTPUT_FOLDER/slowquery_digest.txt-$dashed_today"
if [ ! -f "$SLOW_LOG" ]; then
echo "ERROR: cannot read mysql slow query log file $SLOW_LOG. Exiting."
exit 1
fi
"$PT_QUERY_EXEC" "$SLOW_LOG" > "$OUTPUT_FILE"
{{ xz }} -9 "$OUTPUT_FILE"
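# Note (assumption about the wiring, not stated in this script): this report
# is expected to run daily, once the slow query log has been rotated to
# "<slow-query-path>-YYYYMMDD", and it leaves an xz-compressed digest
# "slowquery_digest.txt-YYYY-MM-DD.xz" in $OUTPUT_FOLDER.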
# See http://www.postfix.org/aliases.5.html for format
{% for name, alias_list in alias_dict.items() -%}
{{ name }}: {{ alias_list | join(', ') }}
{% endfor %}
# http://www.postfix.org/STANDARD_CONFIGURATION_README.html
# http://www.postfix.org/postconf.5.html
queue_directory = {{ queue_directory }}
command_directory = {{ bin_directory }}
daemon_directory = {{ usr_directory }}/libexec/postfix
data_directory = {{ data_directory }}
mail_owner = {{ mail_owner }}
alias_maps = {{ aliases }}
alias_database = {{ aliases }}
mail_spool_directory = {{ spool_directory }}
sendmail_path = {{ bin_directory }}/sendmail
newaliases_path = {{ bin_directory }}/newaliases
mailq_path = {{ bin_directory }}/mailq
setgid_group = {{ setgid_group }}
html_directory = no
manpage_directory = {{ postfix_location }}/usr/local/man
sample_directory = {{ postfix_location }}/etc/postfix
readme_directory = no
inet_interfaces = {{ inet_interfaces }}
smtp_bind_address = 0.0.0.0
smtp_bind_address6 = ::
compatibility_level = 3.6
smtputf8_enable = no
# Compared to default:
# - remove X-related variables, irrelevant for slapos, to be concise
# - add SASL_CONF_PATH to have per-partition cyrus-sasl configuration
import_environment =
  MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ LANG=C
  SASL_CONF_PATH
# Mandatory sasl auth over TLS
# XXX: no man-in-the-middle protection
smtpd_tls_cert_file = {{ cert }}
smtpd_tls_key_file = {{ key }}
smtpd_tls_dh512_param_file = {{ dh_512 }}
{#
Note: 1024 vs. 2048 is not a typo, but what is actually recommended in the
Postfix documentation.
-#}
smtpd_tls_dh1024_param_file = {{ dh_2048 }}
smtpd_tls_security_level = encrypt
smtpd_sasl_auth_enable = yes
# Reject as many bogus cases as early as possible, so that errors are visible
# to the ERP5 developer rather than relying on bounces.
smtpd_recipient_restrictions =
  reject_non_fqdn_recipient
  reject_unknown_recipient_domain
  permit_sasl_authenticated
  reject
# Do not allow mynetworks to send mail, only authenticated clients.
smtpd_relay_restrictions =
  permit_sasl_authenticated
  defer_unauth_destination
# We do not pass mail addresses on command lines, so accept addresses starting
# with a dash.
allow_min_user = yes
# Disable local delivery
local_transport = error
smtpd_milters ={{ '\n '.join(milter_list) }}
{% if relayhost -%}
relayhost = {{ relayhost }}
smtp_tls_security_level = encrypt
smtp_tls_session_cache_database = btree:{{ data_directory }}/smtp_scache
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = {{ sasl_passwd }}
smtp_sasl_tls_security_options = noanonymous
{%- endif %}
maillog_file = {{ log_directory }}/postfix.log
maillog_file_compressor = {{ xz_utils_location }}/bin/xz
maillog_file_prefixes = {{ log_directory }}
# http://www.postfix.org/master.5.html
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
{{ smtp }} inet n - n - - smtpd
pickup unix n - n 60 1 pickup
cleanup unix n - n - 0 cleanup
qmgr unix n - n 300 1 qmgr
tlsmgr unix - - n 1000? 1 tlsmgr
rewrite unix - - n - - trivial-rewrite
bounce unix - - n - 0 bounce
defer unix - - n - 0 bounce
trace unix - - n - 0 bounce
verify unix - - n - 1 verify
flush unix n - n 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - n - - smtp
relay unix - - n - - smtp
showq unix n - n - - showq
error unix - - n - - error
retry unix - - n - - error
discard unix - - n - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - n - - lmtp
anvil unix - - n - 1 anvil
scache unix - - n - 1 scache
postlog unix-dgram n - n - 1 postlogd
module(
load="imuxsock"
SysSock.Name="{{ parameter_dict['log-socket'] }}")
# Simply output the raw line without any additional information, as haproxy
# already emits enough information by itself.
# Also cut the leading space off msg (an rsyslogd internal artifact) and
# truncate at 8k, the default of $MaxMessageSize.
template(name="rawoutput" type="string" string="%msg:2:8192%\n")
$FileCreateMode 0644
$DirCreateMode 0700
$Umask 0022
$WorkDirectory {{ parameter_dict['spool-directory'] }}
local0.=info {{ parameter_dict['access-log-file'] }};rawoutput
local0.warning {{ parameter_dict['error-log-file'] }}
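# Illustration: haproxy is expected to log to the socket declared above with
# facility local0; info-level lines then reach the access log through the
# "rawoutput" template, while warnings and above go to the error log.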
#!{{ bin_path }}
# BEWARE: This file is operated by slapos node
# BEWARE: It will be overwritten automatically
import argparse, os, sys, traceback
from erp5.util import taskdistribution
from lxml import etree
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import six
import time
from urllib import urlopen
import json
def main():
  parser = argparse.ArgumentParser(description='Run a test suite.')
  parser_configuration = {{ repr(configuration) }}
  parser.add_argument('--test_suite', help='The test suite name')
  parser.add_argument('--test_suite_title', help='The test suite title')
  parser.add_argument('--test_node_title', help='The test node title')
  parser.add_argument('--project_title', help='The project title')
  parser.add_argument('--revision', help='The revision to test',
                      default='dummy_revision')
  parser.add_argument('--node_quantity', help='ignored', type=int)
  parser.add_argument('--master_url',
                      help='The Url of Master controling many suites')
  args = parser.parse_args()

  test_line_dict = {}
  test_suite_title = args.test_suite_title or args.test_suite
  test_suite = args.test_suite
  revision = args.revision

  if not parser_configuration['server-url'] or not parser_configuration['remote-access-url']:
    sys.exit(-1)

  if parser_configuration['run-only']:
    url = "%s/erp5/portal_tests/%s/core/TestRunner.html" \
          "?test=../test_suite_html" \
          "&auto=on" \
          "&resultsUrl=../getId" \
          "&__ac_name=%s" \
          "&__ac_password=%s" % (parser_configuration['remote-access-url'], parser_configuration['run-only'], {{ repr(user) }}, {{ repr(password) }})
  else:
    url = "%s/erp5/portal_tests/core/TestRunner.html" \
          "?test=../test_suite_html" \
          "&auto=on" \
          "&resultsUrl=../getId" \
          "&__ac_name=%s" \
          "&__ac_password=%s" % (parser_configuration['remote-access-url'], {{ repr(user) }}, {{ repr(password) }})

  # Wait until all activities are finished...
  wait_url = "%s/erp5/ActivityTool_getSqlStatisticList" \
             "?__ac_name=%s" \
             "&__ac_password=%s" % (parser_configuration['remote-access-url'], {{ repr(user) }}, {{ repr(password) }})
  while 1:
    try:
      response = urlopen(wait_url)
      try:
        if response.code == 500:
          sys.exit(-1)
        if response.code == 200:
          static_dict = json.loads(response.read())
          activity_list = []
          for _, value in six.iteritems(static_dict):
            activity_list += value['line_list']
          if len(activity_list) == 0:
            break
          elif all(x['node'] == -2 for x in activity_list):
            sys.exit(-1)
      finally:
        response.close()
    except Exception:
      traceback.print_exc()
    time.sleep(600)

  # Unsubscribe activities: launching Zuite_waitForActivities while activities
  # are subscribed may create conflicts.
  activity_unsubscribe_url = "%s/erp5/portal_activities/unsubscribe" \
                             "?__ac_name=%s" \
                             "&__ac_password=%s" % (parser_configuration['remote-access-url'], {{ repr(user) }}, {{ repr(password) }})
  print activity_unsubscribe_url
  try:
    response = urlopen(activity_unsubscribe_url)
    try:
      if response.code != 200:
        sys.exit(-1)
    finally:
      response.close()
  except Exception:
    traceback.print_exc()

  tool = taskdistribution.TaskDistributor(portal_url=args.master_url)
  browser = webdriver.Remote(parser_configuration['server-url'], parser_configuration['desired-capabilities'])
  # Initialized up-front: the except/finally paths below reference these even
  # when the try block fails early.
  agent = ''
  test_count = 0
  start_time = time.time()
  try:
    agent = browser.execute_script("return navigator.userAgent")
    print url
    print agent
    start_time = time.time()
    browser.get(url)
    # Wait for Zelenium to be loaded
    WebDriverWait(browser, 10).until(EC.presence_of_element_located((
      By.XPATH, '//iframe[@id="testSuiteFrame"]'
    )))
    # XXX No idea how to wait for the iframe content to be loaded
    time.sleep(5)
    # Count the number of tests to be executed
    test_count = browser.execute_script(
      "return document.getElementById('testSuiteFrame').contentDocument.querySelector('tbody').children.length"
    ) - 1  # First child is the file name
    # Wait for the tests to be executed
    while 1:
      try:
        WebDriverWait(browser, parser_configuration['max-duration']).until(EC.presence_of_element_located((
          By.XPATH, '//td[@id="testRuns" and contains(text(), "%i")]' % test_count
        )))
        break
      except TimeoutException:
        raise
      except Exception:
        traceback.print_exc()
  except:
    test_line_dict['UnexpectedException'] = {
      'test_count': 1,
      'error_count': 0,
      'failure_count': 1,
      'skip_count': 0,
      'duration': 1,
      'command': url,
      'stdout': agent,
      'stderr': traceback.format_exc()
    }
  finally:
    execution_duration = round(time.time() - start_time, 2)
    if test_count:
      test_execution_duration = execution_duration / test_count
    else:
      test_execution_duration = 0
    html_parser = etree.HTMLParser(recover=True)
    # body = etree.fromstring(browser.page_source.encode('UTF-8'), html_parser)
    # test_count = int(body.xpath('//td[@id="testRuns"]')[0].text)
    # failed_test_count = int(body.xpath('//td[@id="testFailures"]')[0].text)
    # print 'Run %i, failed %i' % (test_count, failed_test_count)
    # https://github.com/appium/appium/issues/5199
    # browser.switch_to.frame(browser.find_element_by_id("testSuiteFrame"))
    # iframe = etree.fromstring(browser.page_source.encode('UTF-8'), html_parser)
    iframe = etree.fromstring(
      browser.execute_script(
        "return document.getElementById('testSuiteFrame').contentDocument.querySelector('html').innerHTML"
      ).encode('UTF-8'),
      html_parser
    )
    # parse test results
    tbody = iframe.xpath('.//body/table/tbody')[0]
    for tr in tbody[1:]:
      # First td is the main title
      test_name = tr[0][0].text
      if len(tr) == 1:
        test_line_dict[test_name] = {
          'test_count': 1,
          'error_count': 0,
          'failure_count': 1,
          'skip_count': 0,
          'duration': 0,
          'command': url,
          'stdout': agent,
          'stderr': '',
          'html_test_result': 'Not Executed due to selenium server related error'
        }
      else:
        skip_count = error_count = 0
        test_table = tr[1].xpath('.//table')[0]
        status = tr.attrib.get('class')
        if status is None or 'status_done' in status:
          skip_count = 1
        elif 'status_failed' in status:
          error_count = 1
        test_line_dict[test_name] = {
          'test_count': 1,
          'error_count': error_count,
          'failure_count': 0,
          'skip_count': skip_count,
          'duration': test_execution_duration,
          'command': url,
          'stdout': agent,
          'stderr': '',
          'html_test_result': etree.tostring(test_table)
        }
    browser.quit()

  try:
    test_result = tool.createTestResult(revision=revision,
                                        test_name_list=list(test_line_dict),
                                        node_title=args.test_node_title,
                                        test_title=test_suite_title,
                                        project_title=args.project_title)
    if test_result is None or not hasattr(args, 'master_url'):
      return
    # report test results
    while 1:
      test_result_line = test_result.start()
      if not test_result_line:
        print 'No test result anymore.'
        break
      print 'Submitting: "%s"' % test_result_line.name
      # report status back to Nexedi ERP5
      test_result_line.stop(**test_line_dict[test_result_line.name])
  except:
    # Catch any exception here, to warn the user instead of being silent,
    # by generating a fake error result.
    traceback.print_exc()
    result = dict(status_code=-1,
                  command=url,
                  stderr=traceback.format_exc(),
                  stdout='')
    # XXX: inform test node master of error
    raise EnvironmentError(result)

if __name__ == "__main__":
  main()
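# Usage sketch (hypothetical wrapper name and URLs, supplied only as an
# illustration; the actual values come from the software release):
#   runTestSuite --test_suite erp5_ui_test --test_node_title node0 \
#     --project_title erp5 --master_url https://example.invalid/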
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:meta="http://namespaces.zope.org/meta"
xmlns:five="http://namespaces.zope.org/five">
<include package="Products.Five" />
<meta:redefinePermission from="zope2.Public" to="zope.Public" />
<!-- Load the meta -->
<include files="package-includes/*-meta.zcml" />
<five:loadProducts file="meta.zcml"/>
<!-- Load the configuration -->
<include files="package-includes/*-configure.zcml" />
<five:loadProducts />
<!-- Load the configuration overrides -->
<includeOverrides files="package-includes/*-overrides.zcml" />
<five:loadProductsOverrides />
<securityPolicy
component="AccessControl.security.SecurityPolicy" />
</configure>
[versions]
Zope2 = 2.13.30
AccessControl = 2.13.16
Acquisition = 2.13.12
DateTime = 2.12.8
DocumentTemplate = 2.13.6
ExtensionClass = 2.13.2
Jinja2 = 2.8.1
MarkupSafe = 1.1.1
Missing = 2.13.1
MultiMapping = 2.13.0
Paste = 1.7.5.1
PasteDeploy = 1.3.4
PasteScript = 1.7.5
Persistence = 2.13.2
Products.BTreeFolder2 = 2.13.5
Products.ExternalMethod = 2.13.1
Products.MIMETools = 2.13.0
Products.MailHost = 2.13.4
Products.OFSP = 2.13.2
Products.PythonScripts = 2.13.2
Products.Sessions = 3.0
Products.StandardCacheManagers = 2.13.1
Products.TemporaryFolder = 3.0
Products.ZCTextIndex = 2.13.5
Products.ZCatalog = 2.13.30
Record = 2.13.0
RestrictedPython = 3.6.0
Sphinx = 1.0.8
ZConfig = 2.9.3
ZODB3 = 3.10.7
ZServer = 3.0
ZopeUndo = 2.12.0
appdirs = 1.4.3
configparser = 4.0.2
contextlib2 = 0.6.0.post1
distlib = 0.3.0
docutils = 0.12
filelock = 3.0.12
importlib-metadata = 1.3.0
importlib-resources = 1.0.2
initgroups = 2.13.0
mechanize = 0.2.5
more-itertools = 5.0.0
mr.developer = 1.34
packaging = 20.1
pathlib2 = 2.3.5
pluggy = 0.13.1
py = 1.8.1
pyparsing = 2.4.6
pytz = 2017.2
repoze.retry = 1.2
repoze.tm2 = 1.0
repoze.who = 2.0
scandir = 1.10.0
six = 1.14.0
tempstorage = 2.12.2
toml = 0.10.0
tox = 3.14.4
transaction = 1.1.1
typing = 3.7.4.1
virtualenv = 20.0.4
z3c.checkversions = 1.1
zExceptions = 2.13.0
zLOG = 2.11.2
zc.buildout = 2.3.1
zc.lockfile = 1.0.2
zc.recipe.egg = 2.0.5
zc.recipe.testrunner = 1.2.1
zdaemon = 2.0.7
zipp = 0.6.0
zope.annotation = 3.5.0
zope.broken = 3.6.0
zope.browser = 1.3
zope.browsermenu = 3.9.1
zope.browserpage = 3.12.2
zope.browserresource = 3.10.3
zope.component = 3.9.5
zope.configuration = 3.7.4
zope.container = 3.11.2
zope.contentprovider = 3.7.2
zope.contenttype = 3.5.5
zope.deferredimport = 3.5.3
zope.dottedname = 3.4.6
zope.event = 3.5.2
zope.exceptions = 3.6.2
zope.filerepresentation = 3.6.1
zope.i18n = 3.7.4
zope.i18nmessageid = 3.5.3
zope.interface = 3.6.8
zope.lifecycleevent = 3.6.2
zope.location = 3.9.1
zope.pagetemplate = 3.5.2
zope.processlifetime = 1.0
zope.proxy = 3.6.1
zope.ptresource = 3.9.0
zope.publisher = 3.12.6
zope.schema = 3.7.1
zope.security = 3.7.4
zope.sendmail = 3.7.5
zope.sequencesort = 3.4.0
zope.site = 3.9.2
zope.size = 3.4.1
zope.structuredtext = 3.5.1
zope.tal = 3.5.2
zope.tales = 3.5.3
zope.testbrowser = 3.11.1
zope.testing = 3.9.7
zope.traversing = 3.13.2
zope.viewlet = 3.7.2
{% set slapparameter_dict = {} %}{# dummy -#}
{% import "root_common" as root_common with context -%}
{% set node_id = parameter_dict['node-id'] -%}
# Note: the environment is set up in the running wrapper script, as zope.conf
# is read too late for some components.
%define INSTANCE {{ parameter_dict['instance'] }}
instancehome $INSTANCE
zserver-threads {{ parameter_dict['thread-amount'] }}
# When ownership checking is enabled, the roles a script runs with are the
# intersection of the user's roles and the script owner's roles. This means
# that revoking a code author's access to the system prevents all scripts
# owned by that user from being of much use.
# This is not how ERP5 approaches development: Managers write code,
# Managers must be trustable and trusted, and their past work should not be
# revoked when their account is terminated.
skip-ownership-checking true
lock-filename {{ parameter_dict['lock-file'] }}
pid-filename {{ parameter_dict['pid-file'] }}
default-zpublisher-encoding utf-8
rest-input-encoding utf-8
rest-output-encoding utf-8
# XXX: isn't this entry implicit?
products {{ parameter_dict['instance-products'] }}
# Magic parameter to use the first entry of X-Forwarded-For as the source IP address.
# (see monkey patches in ERP5Type/patches/HTTPRequest.py and ERP5Type/patches/http_server.py)
# * Frontend HTTP server should drop incoming X-Forwarded-For.
# * Communication between frontend and backend should use SSL Client Authentication.
# * Backend proxy drops incoming X-Forwarded-For without valid SSL Client Authentication.
trusted-proxy 0.0.0.0
{% if not parameter_dict['wsgi'] -%}
{% if parameter_dict['webdav'] -%}
<webdav-source-server>
address {{ parameter_dict['ip'] }}:{{ parameter_dict['port'] }}
force-connection-close off
</webdav-source-server>
{% else %}
<http-server>
address {{ parameter_dict['ip'] }}:{{ parameter_dict['port'] }}
</http-server>
{% endif %}
{%- endif %}
<zoperunner>
program $INSTANCE/bin/runzope
</zoperunner>
<product-config DeadlockDebugger>
dump_url {{ parameter_dict['deadlock-path'] }}
secret {{ parameter_dict['deadlock-debugger-password'] }}
</product-config>
{% if 'longrequest-logger-interval' in parameter_dict -%}
<product-config LongRequestLogger>
logfile {{ parameter_dict['longrequest-logger-file'] }}
timeout {{ parameter_dict['longrequest-logger-timeout'] }}
interval {{ parameter_dict['longrequest-logger-interval'] }}
</product-config>
{% endif -%}
{% if 'large-file-threshold' in parameter_dict -%}
large-file-threshold {{ parameter_dict['large-file-threshold'] }}
{% endif -%}
{% if 'tidstorage-ip' in parameter_dict -%}
<product-config TIDStorage>
backend-ip {{ parameter_dict['tidstorage-ip'] }}
backend-port {{ parameter_dict['tidstorage-port'] }}
</product-config>
{% endif -%}
<product-config CMFActivity>
node-id {{ node_id }}
</product-config>
{% if not parameter_dict['wsgi'] -%}
{% set timerserver_interval = parameter_dict['timerserver-interval'] -%}
{% if timerserver_interval -%}
%import Products.TimerService.timerserver
<timer-server>
interval {{ timerserver_interval }}
</timer-server>
{% endif %}
{% endif -%}
{% set sql_connection_string = parameter_dict.get('sql-connection-string') -%}
{% if sql_connection_string -%}
{% set bt5_repository_url = [] -%}
{% set sr_path = os.path.realpath(parameter_dict['home'] + '/software_release') + '/' -%}
{% for url in parameter_dict['bt5-repository-url'].split() -%}
{% if url.startswith(sr_path) -%}
{% set url = '~/software_release/' + url[len(sr_path):] -%}
{% endif -%}
{% do bt5_repository_url.append(url) -%}
{% endfor -%}
<product-config initsite>
id {{ parameter_dict['site-id'] }}
owner {{ parameter_dict['inituser'] }}
erp5_sql_connection_string {{ sql_connection_string }}
cmf_activity_sql_connection_string {{ sql_connection_string }}
bt5_repository_url {{ ' '.join(bt5_repository_url) }}
bt5 {{ parameter_dict['bt5'] }}
{%- if parameter_dict['id-store-interval'] != None %}
id_store_interval {{ parameter_dict['id-store-interval'] }}
{%- endif %}
cloudooo_url {{ parameter_dict['cloudooo-url'] }}
</product-config>
{% endif -%}
<eventlog>
level info
<logfile>
dateformat
path {{ parameter_dict['event-log'] }}
</logfile>
</eventlog>
<logger access>
level WARN
<logfile>
dateformat
format %(message)s
path {{ parameter_dict['z2-log'] }}
</logfile>
</logger>
<zodb_db temporary>
<temporarystorage>
name temporary storage for sessioning
</temporarystorage>
mount-point /temp_folder
container-class Products.TemporaryFolder.TemporaryContainer
</zodb_db>
{% set developer_list = parameter_dict['developer-list'] -%}
{% set publisher_timeout = parameter_dict['publisher-timeout'] -%}
{% set activity_timeout = parameter_dict['activity-timeout'] -%}
{% if developer_list or publisher_timeout or activity_timeout -%}
%import Products.ERP5Type
<ERP5Type erp5>
{% if developer_list %}developers {{ developer_list | join(' ') }}{% endif %}
{% if publisher_timeout %}publisher-timeout {{ publisher_timeout }}{% endif %}
{% if activity_timeout %}activity-timeout {{ activity_timeout }}{% endif %}
</ERP5Type>
{% endif -%}
{% for m in parameter_dict['import-list'] -%}
%import {{ m }}
{% endfor -%}
{% set type_dict = {'neo': 'NEOStorage', 'zeo': 'zeoclient'} %}
{% for name, zodb_dict in six.iteritems(parameter_dict['zodb-dict']) %}
<zodb_db {{ name }}>
{%- set storage_type = type_dict[zodb_dict.pop('type')] %}
{%- set storage_dict = zodb_dict.pop('storage-dict') %}
{%- do root_common.apply_overrides(zodb_dict, node_id) %}
{%- for key, value in six.iteritems(zodb_dict) %}
{{ key }} {{ value }}
{%- endfor %}
<{{ storage_type }}>
{%- do root_common.apply_overrides(storage_dict, node_id) %}
{%- for key, value in six.iteritems(storage_dict) %}
{{ key }} {{ value }}
{%- endfor %}
</{{ storage_type }}>
</zodb_db>
{% endfor -%}
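# Illustration (hypothetical zodb-dict entry for a NEO-backed root database):
#   {"root": {"type": "neo", "mount-point": "/",
#             "storage-dict": {"master_nodes": "[2001:db8::1]:2051",
#                              "name": "cluster"}}}
# would render roughly as:
#   <zodb_db root>
#     mount-point /
#     <NEOStorage>
#       master_nodes [2001:db8::1]:2051
#       name cluster
#     </NEOStorage>
#   </zodb_db>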
[versions]
# ZTK
zope.annotation = 3.5.0
zope.applicationcontrol = 3.5.5
zope.authentication = 3.7.1
zope.broken = 3.6.0
zope.browser = 1.3
zope.browsermenu = 3.9.1
zope.browserpage = 3.12.2
zope.browserresource = 3.10.3
zope.cachedescriptors = 3.5.1
zope.catalog = 3.8.2
zope.component = 3.9.5
zope.componentvocabulary = 1.0.1
zope.configuration = 3.7.4
zope.container = 3.11.2
zope.contentprovider = 3.7.2
zope.contenttype = 3.5.5
zope.copy = 3.5.0
zope.copypastemove = 3.7.0
zope.datetime = 3.4.1
zope.deferredimport = 3.5.3
zope.deprecation = 3.4.1
zope.dottedname = 3.4.6
zope.dublincore = 3.7.1
zope.error = 3.7.4
zope.event = 3.5.2
zope.exceptions = 3.6.2
zope.filerepresentation = 3.6.1
zope.formlib = 4.0.6
zope.hookable = 3.4.1
zope.i18n = 3.7.4
zope.i18nmessageid = 3.5.3
zope.index = 3.6.4
zope.interface = 3.6.7
zope.intid = 3.7.2
zope.keyreference = 3.6.4
zope.lifecycleevent = 3.6.2
zope.location = 3.9.1
zope.login = 1.0.0
zope.mimetype = 1.3.1
zope.minmax = 1.1.2
zope.pagetemplate = 3.5.2
zope.password = 3.6.1
zope.pluggableauth = 1.0.3
zope.principalannotation = 3.6.1
zope.principalregistry = 3.7.1
zope.processlifetime = 1.0
zope.proxy = 3.6.1
zope.ptresource = 3.9.0
zope.publisher = 3.12.6
zope.ramcache = 1.0
zope.schema = 3.7.1
zope.security = 3.7.4
zope.securitypolicy = 3.7.0
zope.sendmail = 3.7.5
zope.sequencesort = 3.4.0
zope.server = 3.6.3
zope.session = 3.9.5
zope.site = 3.9.2
zope.size = 3.4.1
zope.structuredtext = 3.5.1
zope.tal = 3.5.2
zope.tales = 3.5.3
zope.testing = 3.9.7
zope.traversing = 3.13.2
zope.viewlet = 3.7.2
# Deprecating
zope.documenttemplate = 3.4.3
# Dependencies
# Needed for the mechanize 0.1.x.
ClientForm = 0.2.10
distribute = 0.6.49
docutils = 0.7
Jinja2 = 2.5.5
# Newer versions of mechanize are not fully py24 compatible.
mechanize = 0.1.11
Paste = 1.7.5.1
PasteDeploy = 1.3.4
PasteScript = 1.7.5
py = 1.3.4
Pygments = 1.3.1
python-gettext = 1.0
pytz = 2013b
RestrictedPython = 3.6.0
setuptools = 12.2
Sphinx = 1.0.8
transaction = 1.1.1
unittest2 = 0.5.1
z3c.recipe.sphinxdoc = 0.0.8
zc.buildout = 2.3.1
zc.lockfile = 1.0.2
ZConfig = 2.8.0
zc.recipe.egg = 1.3.2
zc.recipe.testrunner = 1.2.1
zc.resourcelibrary = 1.3.4
zdaemon = 2.0.7
ZODB3 = 3.9.7
zope.mkzeoinstance = 3.9.6
# toolchain
argparse = 1.1
coverage = 3.5.3
lxml = 2.2.8
mr.developer = 1.25
tl.eggdeps = 0.4
nose = 1.1.2
z3c.checkversions = 0.4.2
z3c.recipe.compattest = 0.12.2
z3c.recipe.depgraph = 0.5
zope.kgs = 1.2.0