# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2015-2018, OpenStack contributors
# This file is distributed under the same license as the openstackhaguide package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: openstackhaguide \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2018-09-21 07:53+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"

#: ../appendix.rst:2
msgid "Appendix"
msgstr ""

#: ../compute-node-ha.rst:3
msgid "Configuring the compute node"
msgstr ""

#: ../compute-node-ha.rst:5
msgid ""
"The `Installation Guides <https://docs.openstack.org/ocata/install/>`_ "
"provide instructions for installing multiple compute nodes. To make the "
"compute nodes highly available, you must configure the environment to "
"include multiple instances of the API and other services."
msgstr ""

#: ../compute-node-ha.rst:12
msgid "Configuring high availability for instances"
msgstr ""

#: ../compute-node-ha.rst:14
msgid ""
"As of September 2016, the OpenStack High Availability community is designing "
"and developing an official and unified way to provide high availability for "
"instances. We are developing automatic recovery from failures of hardware or "
"hypervisor-related software on the compute node, or other failures that "
"could prevent instances from functioning correctly, such as issues with a "
"cinder volume I/O path."
msgstr ""

#: ../compute-node-ha.rst:21
msgid ""
"More details are available in the `user story <https://specs.openstack.org/"
"openstack/openstack-user-stories/user-stories/proposed/ha_vm.html>`_ co-"
"authored by OpenStack's HA community and `Product Working Group <https://"
"wiki.openstack.org/wiki/ProductTeam>`_ (PWG), where this feature is "
"identified as missing functionality in OpenStack, which should be addressed "
"with high priority."
msgstr ""

#: ../compute-node-ha.rst:29
msgid "Existing solutions"
msgstr ""

#: ../compute-node-ha.rst:31
msgid ""
"The architectural challenges of instance HA and several currently existing "
"solutions were presented in `a talk at the Austin summit <https://www."
"openstack.org/videos/video/high-availability-for-pets-and-hypervisors-state-"
"of-the-nation>`_, for which `slides are also available <http://aspiers."
"github.io/openstack-summit-2016-austin-compute-ha/>`_."
msgstr ""

#: ../compute-node-ha.rst:36
msgid ""
"The code for three of these solutions can be found online at the following "
"links:"
msgstr ""

#: ../compute-node-ha.rst:39
msgid ""
"`a mistral-based auto-recovery workflow <https://github.com/gryf/mistral-"
"evacuate>`_, by Intel"
msgstr ""

#: ../compute-node-ha.rst:41
msgid "`masakari <https://launchpad.net/masakari>`_, by NTT"
msgstr ""

#: ../compute-node-ha.rst:42
msgid ""
"`OCF RAs <https://aspiers.github.io/openstack-summit-2016-austin-compute-ha/"
"#/ocf-pros-cons>`_, as used by Red Hat and SUSE"
msgstr ""

#: ../compute-node-ha.rst:47
msgid "Current upstream work"
msgstr ""

#: ../compute-node-ha.rst:49
msgid ""
"Work is in progress on a unified approach, which combines the best aspects "
"of existing upstream solutions. More details are available on `the HA VMs "
"user story wiki <https://wiki.openstack.org/wiki/ProductTeam/User_Stories/"
"HA_VMs>`_."
msgstr ""

#: ../controller-ha-haproxy.rst:3
msgid "HAProxy"
msgstr ""

#: ../controller-ha-haproxy.rst:5
msgid ""
"HAProxy provides a fast and reliable HTTP reverse proxy and load balancer "
"for TCP or HTTP applications. It is particularly suited for web sites "
"crawling under very high loads while needing persistence or Layer 7 "
"processing. It realistically supports tens of thousands of connections "
"with recent hardware."
msgstr ""

#: ../controller-ha-haproxy.rst:11
msgid ""
"Each instance of HAProxy configures its front end to accept connections only "
"to the virtual IP (VIP) address. The HAProxy back end (termination point) is "
"a list of all the IP addresses of instances for load balancing."
msgstr ""

#: ../controller-ha-haproxy.rst:17
msgid ""
"To ensure that your HAProxy installation is not a single point of failure, "
"it is advisable to have multiple HAProxy instances running."
msgstr ""

#: ../controller-ha-haproxy.rst:20
msgid ""
"You can also ensure availability by other means, such as Keepalived or "
"Pacemaker."
msgstr ""

#: ../controller-ha-haproxy.rst:23
msgid ""
"Alternatively, you can use a commercial load balancer, either hardware-based "
"or software-based. We recommend a hardware load balancer as it generally "
"offers better performance."
msgstr ""

#: ../controller-ha-haproxy.rst:27
msgid ""
"For detailed instructions about installing HAProxy on your nodes, see the "
"HAProxy `official documentation <http://www.haproxy.org/#docs>`_."
msgstr ""

#: ../controller-ha-haproxy.rst:31 ../shared-database-manage.rst:179
msgid "Configuring HAProxy"
msgstr ""

#: ../controller-ha-haproxy.rst:33
msgid "Restart the HAProxy service."
msgstr ""

#: ../controller-ha-haproxy.rst:35
msgid ""
"Locate your HAProxy instance on each OpenStack controller node in your "
"environment. The following is an example ``/etc/haproxy/haproxy.cfg`` "
"configuration file. Configure your instance using this configuration file; "
"you will need a copy of it on each controller node."
msgstr ""
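# The example haproxy.cfg is not extracted into this catalog. A minimal
# sketch of one of its listen blocks, assuming 10.0.0.11 as the VIP and
# 10.0.0.12-14 as controller addresses (all addresses illustrative):
#
#   listen dashboard_cluster
#     bind 10.0.0.11:443
#     balance source
#     option tcpka
#     option httpchk
#     server controller1 10.0.0.12:443 check inter 2000 rise 2 fall 5
#     server controller2 10.0.0.13:443 check inter 2000 rise 2 fall 5
#     server controller3 10.0.0.14:443 check inter 2000 rise 2 fall 5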

#: ../controller-ha-haproxy.rst:188
msgid ""
"The Galera cluster configuration directive ``backup`` indicates that two of "
"the three controllers are standby nodes. This ensures that only one node "
"services write requests because OpenStack support for multi-node writes is "
"not yet production-ready."
msgstr ""
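# A sketch of the Galera listen block that the ``backup`` note above
# describes, with the second and third controllers as standby back ends
# (addresses illustrative):
#
#   listen galera_cluster
#     bind 10.0.0.11:3306
#     balance source
#     option mysql-check
#     server controller1 10.0.0.12:3306 check inter 2000 rise 2 fall 5
#     server controller2 10.0.0.13:3306 backup check inter 2000 rise 2 fall 5
#     server controller3 10.0.0.14:3306 backup check inter 2000 rise 2 fall 5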

#: ../controller-ha-haproxy.rst:195
msgid ""
"The Telemetry API service configuration does not have the ``option httpchk`` "
"directive as it cannot process this check properly."
msgstr ""

#: ../controller-ha-haproxy.rst:200
msgid ""
"Configure the kernel parameter to allow non-local IP binding. This allows "
"running HAProxy instances to bind to a VIP for failover. Add the following "
"line to ``/etc/sysctl.conf``:"
msgstr ""

#: ../controller-ha-haproxy.rst:208
msgid "Restart the host or, to make changes work immediately, invoke:"
msgstr ""
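# The sysctl line and the immediate-effect command referenced above
# (standard Linux sysctl usage):
#
#   net.ipv4.ip_nonlocal_bind = 1
#
#   # sysctl -p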

#: ../controller-ha-haproxy.rst:214
msgid ""
"Add HAProxy to the cluster and ensure the VIPs can only run on machines "
"where HAProxy is active:"
msgstr ""

#: ../controller-ha-haproxy.rst:217 ../controller-ha-pacemaker.rst:589
#: ../controller-ha-pacemaker.rst:625
msgid "``pcs``"
msgstr ""

#: ../controller-ha-haproxy.rst:225 ../controller-ha-pacemaker.rst:580
#: ../controller-ha-pacemaker.rst:619
msgid "``crmsh``"
msgstr ""
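# A sketch of the pcs variant of this step, assuming a cloned systemd
# haproxy resource and a VIP resource named vip (resource names
# illustrative):
#
#   $ pcs resource create lb-haproxy systemd:haproxy --clone
#   $ pcs constraint order start vip then lb-haproxy-clone kind=Optional
#   $ pcs constraint colocation add lb-haproxy-clone with vip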

#: ../controller-ha-identity.rst:3
msgid "Highly available Identity API"
msgstr ""

#: ../controller-ha-identity.rst:5
msgid ""
"Making the OpenStack Identity service highly available in active/passive "
"mode involves:"
msgstr ""

#: ../controller-ha-identity.rst:8
msgid ":ref:`identity-pacemaker`"
msgstr ""

#: ../controller-ha-identity.rst:9
msgid ":ref:`identity-config-identity`"
msgstr ""

#: ../controller-ha-identity.rst:10
msgid ":ref:`identity-services-config`"
msgstr ""

#: ../controller-ha-identity.rst:15 ../shared-database-manage.rst:9
#: ../storage-ha-image.rst:14
msgid "Prerequisites"
msgstr ""

#: ../controller-ha-identity.rst:17
msgid ""
"Before beginning, ensure you have read the `OpenStack Identity service "
"getting started documentation <https://docs.openstack.org/admin-guide/common/"
"get-started-identity.html>`_."
msgstr ""

#: ../controller-ha-identity.rst:22
msgid "Add OpenStack Identity resource to Pacemaker"
msgstr ""

#: ../controller-ha-identity.rst:24
msgid ""
"The following sections detail how to add the OpenStack Identity resource to "
"Pacemaker on SUSE and Red Hat."
msgstr ""

#: ../controller-ha-identity.rst:28
msgid "SUSE"
msgstr ""

#: ../controller-ha-identity.rst:30
msgid ""
"SUSE Linux Enterprise and SUSE-based distributions, such as openSUSE, use a "
"set of OCF agents for controlling OpenStack services."
msgstr ""

#: ../controller-ha-identity.rst:33
msgid ""
"Run the following commands to download the OpenStack Identity resource "
"agent for Pacemaker:"
msgstr ""
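# A sketch of the download step, assuming the agent comes from the
# openstack-resource-agents repository (URL and paths illustrative):
#
#   # cd /usr/lib/ocf/resource.d
#   # mkdir openstack
#   # cd openstack
#   # wget https://git.openstack.org/cgit/openstack/openstack-resource-agents/plain/ocf/keystone
#   # chmod a+rx *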

#: ../controller-ha-identity.rst:44
msgid ""
"Add the Pacemaker configuration for the OpenStack Identity resource by "
"running the following command to connect to the Pacemaker cluster:"
msgstr ""

#: ../controller-ha-identity.rst:51 ../storage-ha-file-systems.rst:43
#: ../storage-ha-image.rst:52
msgid "Add the following cluster resources:"
msgstr ""
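# A sketch of the resource definition entered at the crm configure prompt,
# assuming the OCF agent downloaded above (credentials and auth URL are
# placeholders):
#
#   primitive p_keystone ocf:openstack:keystone \
#     params config="/etc/keystone/keystone.conf" \
#       os_password="secret" os_username="admin" \
#       os_tenant_name="admin" os_auth_url="http://10.0.0.11:5000/v2.0/" \
#     op monitor interval="30s" timeout="30s"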

#: ../controller-ha-identity.rst:61
msgid ""
"This configuration creates ``p_keystone``, a resource for managing the "
"OpenStack Identity service."
msgstr ""

#: ../controller-ha-identity.rst:64
msgid ""
"Commit your configuration changes from the :command:`crm configure` menu "
"with the following command:"
msgstr ""

#: ../controller-ha-identity.rst:71
msgid ""
"The :command:`crm configure` command supports batch input. You may copy and "
"paste the above lines into your live Pacemaker configuration, and then make "
"changes as required."
msgstr ""

#: ../controller-ha-identity.rst:75
msgid ""
"For example, you may enter ``edit p_ip_keystone`` from the :command:`crm "
"configure` menu and edit the resource to match your preferred virtual IP "
"address."
msgstr ""

#: ../controller-ha-identity.rst:79
msgid ""
"Pacemaker now starts the OpenStack Identity service and its dependent "
"resources on one of your nodes."
msgstr ""

#: ../controller-ha-identity.rst:83
msgid "Red Hat"
msgstr ""

#: ../controller-ha-identity.rst:85
msgid ""
"For Red Hat Enterprise Linux and Red Hat-based Linux distributions, the "
"following process uses Systemd unit files."
msgstr ""

#: ../controller-ha-identity.rst:95
msgid "Configure OpenStack Identity service"
msgstr ""

#: ../controller-ha-identity.rst:97
msgid ""
"Edit the :file:`keystone.conf` file to change the values of the :manpage:"
"`bind(2)` parameters:"
msgstr ""
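# A sketch of the bind settings in keystone.conf, assuming 10.0.0.12 as
# this node's own address (illustrative):
#
#   bind_host = 10.0.0.12
#   public_bind_host = 10.0.0.12
#   admin_bind_host = 10.0.0.12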

#: ../controller-ha-identity.rst:106
msgid ""
"The ``admin_bind_host`` parameter lets you use a private network for admin "
"access."
msgstr ""

#: ../controller-ha-identity.rst:109
msgid ""
"To be sure that all data is highly available, ensure that everything is "
"stored in the MySQL database (which is also highly available):"
msgstr ""
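# A sketch of keystone.conf sections that keep their state in SQL (the
# shorthand driver values assume a recent release):
#
#   [catalog]
#   driver = sql
#
#   [identity]
#   driver = sql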

#: ../controller-ha-identity.rst:122
msgid ""
"If the Identity service will be sending ceilometer notifications and your "
"message bus is configured for high availability, you will need to ensure "
"that the Identity service is correctly configured to use it. For details on "
"how to configure the Identity service for this kind of deployment, see :doc:"
"`shared-messaging`."
msgstr ""

#: ../controller-ha-identity.rst:131
msgid ""
"Configure OpenStack services to use the highly available OpenStack Identity"
msgstr ""

#: ../controller-ha-identity.rst:133
msgid ""
"Your OpenStack services must now point their OpenStack Identity "
"configuration to the highly available virtual cluster IP address."
msgstr ""

#: ../controller-ha-identity.rst:136
msgid ""
"For OpenStack Compute (if your OpenStack Identity service IP address is "
"10.0.0.11), use the following configuration in the :file:`api-paste.ini` "
"file:"
msgstr ""
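# A sketch of the api-paste.ini fragment, assuming the keystonemiddleware
# auth_token filter:
#
#   [filter:authtoken]
#   auth_host = 10.0.0.11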

#: ../controller-ha-identity.rst:144
msgid "Create the OpenStack Identity Endpoint with this IP address."
msgstr ""

#: ../controller-ha-identity.rst:148
msgid ""
"If you are using both private and public IP addresses, create two virtual IP "
"addresses and define the endpoint. For example:"
msgstr ""
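# A sketch of endpoint creation with separate public and admin VIPs
# (RegionOne and PUBLIC_VIP are placeholders):
#
#   $ openstack endpoint create --region RegionOne identity \
#     public http://PUBLIC_VIP:5000/v2.0
#   $ openstack endpoint create --region RegionOne identity \
#     admin http://10.0.0.11:35357/v2.0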

#: ../controller-ha-identity.rst:162
msgid ""
"If you are using the horizon Dashboard, edit the :file:`local_settings.py` "
"file to include the following:"
msgstr ""
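# The local_settings.py line this entry refers to, assuming the VIP above:
#
#   OPENSTACK_HOST = "10.0.0.11"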

#: ../controller-ha-memcached.rst:3 ../intro-ha-arch-pacemaker.rst:178
msgid "Memcached"
msgstr ""

#: ../controller-ha-memcached.rst:5
msgid ""
"Memcached is a general-purpose distributed memory caching system. It is used "
"to speed up dynamic database-driven websites by caching data and objects in "
"RAM to reduce the number of times an external data source must be read."
msgstr ""

#: ../controller-ha-memcached.rst:10
msgid ""
"Memcached is a memory cache daemon that can be used by most OpenStack "
"services to store ephemeral data, such as tokens."
msgstr ""

#: ../controller-ha-memcached.rst:13
msgid ""
"Access to Memcached is not handled by HAProxy because replicated access is "
"currently in an experimental state. Instead, OpenStack services must be "
"supplied with the full list of hosts running Memcached."
msgstr ""

#: ../controller-ha-memcached.rst:18
msgid ""
"The Memcached client implements hashing to balance objects among the "
"instances. Failure of an instance impacts only a percentage of the objects "
"and the client automatically removes it from the list of instances. The SLA "
"is several minutes."
msgstr ""

#: ../controller-ha-pacemaker.rst:3
msgid "Pacemaker cluster stack"
msgstr ""

#: ../controller-ha-pacemaker.rst:5
msgid ""
"The `Pacemaker <http://clusterlabs.org/>`_ cluster stack is a state-of-the-"
"art high availability and load balancing stack for the Linux platform. "
"Pacemaker is used to make OpenStack infrastructure highly available."
msgstr ""

#: ../controller-ha-pacemaker.rst:11
msgid ""
"It is storage and application-agnostic, and in no way specific to OpenStack."
msgstr ""

#: ../controller-ha-pacemaker.rst:13
msgid ""
"Pacemaker relies on the `Corosync <https://corosync.github.io/corosync/>`_ "
"messaging layer for reliable cluster communications. Corosync implements the "
"Totem single-ring ordering and membership protocol. It also provides UDP and "
"InfiniBand based messaging, quorum, and cluster membership to Pacemaker."
msgstr ""

#: ../controller-ha-pacemaker.rst:19
msgid ""
"Pacemaker does not inherently understand the applications it manages. "
"Instead, it relies on resource agents (RAs) that are scripts that "
"encapsulate the knowledge of how to start, stop, and check the health of "
"each application managed by the cluster."
msgstr ""

#: ../controller-ha-pacemaker.rst:24
msgid ""
"These agents must conform to one of the `OCF <https://github.com/"
"ClusterLabs/OCF-spec/blob/master/ra/resource-agent-api.md>`_, `SysV Init "
"<http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/LSB-Core-"
"generic/iniscrptact.html>`_, Upstart, or Systemd standards."
msgstr ""

#: ../controller-ha-pacemaker.rst:29
msgid ""
"Pacemaker ships with a large set of OCF agents (such as those managing MySQL "
"databases, virtual IP addresses, and RabbitMQ), but can also use any agents "
"already installed on your system and can be extended with your own (see the "
"`developer guide <http://www.linux-ha.org/doc/dev-guides/ra-dev-guide."
"html>`_)."
msgstr ""

#: ../controller-ha-pacemaker.rst:35
msgid "The steps to implement the Pacemaker cluster stack are:"
msgstr ""

#: ../controller-ha-pacemaker.rst:37
msgid ":ref:`pacemaker-install`"
msgstr ""

#: ../controller-ha-pacemaker.rst:38
msgid ":ref:`pacemaker-corosync-setup`"
msgstr ""

#: ../controller-ha-pacemaker.rst:39
msgid ":ref:`pacemaker-corosync-start`"
msgstr ""

#: ../controller-ha-pacemaker.rst:40
msgid ":ref:`pacemaker-start`"
msgstr ""

#: ../controller-ha-pacemaker.rst:41
msgid ":ref:`pacemaker-cluster-properties`"
msgstr ""

#: ../controller-ha-pacemaker.rst:46
msgid "Install packages"
msgstr ""

#: ../controller-ha-pacemaker.rst:48
msgid ""
"On any host that is meant to be part of a Pacemaker cluster, establish "
"cluster communications through the Corosync messaging layer. This involves "
"installing the following packages (and their dependencies, which your "
"package manager usually installs automatically):"
msgstr ""

#: ../controller-ha-pacemaker.rst:53
msgid "`pacemaker`"
msgstr ""

#: ../controller-ha-pacemaker.rst:55
msgid "`pcs` (CentOS or RHEL) or `crmsh`"
msgstr ""

#: ../controller-ha-pacemaker.rst:57
msgid "`corosync`"
msgstr ""

#: ../controller-ha-pacemaker.rst:59
msgid "`fence-agents` (CentOS or RHEL) or `cluster-glue`"
msgstr ""

#: ../controller-ha-pacemaker.rst:61
msgid "`resource-agents`"
msgstr ""

#: ../controller-ha-pacemaker.rst:63
msgid "`libqb0`"
msgstr ""

#: ../controller-ha-pacemaker.rst:68
msgid "Set up the cluster with pcs"
msgstr ""

#: ../controller-ha-pacemaker.rst:70
msgid "Make sure `pcs` is running and configured to start at boot time:"
msgstr ""
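# A sketch of this step on systemd-based distributions (the pcs daemon is
# pcsd):
#
#   # systemctl enable pcsd
#   # systemctl start pcsd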

#: ../controller-ha-pacemaker.rst:77
msgid "Set a password for the hacluster user on each host:"
msgstr ""
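# A sketch of setting the password on CentOS or RHEL (the password is a
# placeholder):
#
#   # echo my-secret-password | passwd --stdin hacluster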

#: ../controller-ha-pacemaker.rst:86
msgid ""
"Since the cluster is a single administrative domain, it is acceptable to use "
"the same password on all nodes."
msgstr ""

#: ../controller-ha-pacemaker.rst:89
msgid ""
"Use that password to authenticate to the nodes that will make up the cluster:"
msgstr ""
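# A sketch of the authentication step, assuming three controller hostnames
# (host names and password are placeholders):
#
#   $ pcs cluster auth controller1 controller2 controller3 \
#     -u hacluster -p my-secret-password --force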

#: ../controller-ha-pacemaker.rst:99
msgid ""
"The ``-p`` option is used to give the password on the command line, which "
"makes it easier to script."
msgstr ""

#: ../controller-ha-pacemaker.rst:102
msgid ""
"Create and name the cluster. Then, start it and enable all components to "
"auto-start at boot time:"
msgstr ""
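# A sketch of creating, starting, and enabling the cluster (cluster name
# and host names are placeholders):
#
#   $ pcs cluster setup --force --name my-openstack-cluster \
#     controller1 controller2 controller3
#   $ pcs cluster start --all
#   $ pcs cluster enable --all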

#: ../controller-ha-pacemaker.rst:114
msgid ""
"In Red Hat Enterprise Linux or CentOS environments, this is a recommended "
"path to perform configuration. For more information, see the `RHEL docs "
"<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/"
"html/High_Availability_Add-On_Reference/ch-clusteradmin-HAAR.html#s1-"
"clustercreate-HAAR>`_."
msgstr ""

#: ../controller-ha-pacemaker.rst:119
msgid "Set up the cluster with `crmsh`"
msgstr ""

#: ../controller-ha-pacemaker.rst:121
msgid ""
"After installing the Corosync package, you must create the :file:`/etc/"
"corosync/corosync.conf` configuration file."
msgstr ""

#: ../controller-ha-pacemaker.rst:126
msgid ""
"For Ubuntu, you should also enable the Corosync service in the ``/etc/"
"default/corosync`` configuration file."
msgstr ""

#: ../controller-ha-pacemaker.rst:129
msgid ""
"Corosync can be configured to work with either multicast or unicast IP "
"addresses or to use the votequorum library."
msgstr ""

#: ../controller-ha-pacemaker.rst:132
msgid ":ref:`corosync-multicast`"
msgstr ""

#: ../controller-ha-pacemaker.rst:133
msgid ":ref:`corosync-unicast`"
msgstr ""

#: ../controller-ha-pacemaker.rst:134
msgid ":ref:`corosync-votequorum`"
msgstr ""

#: ../controller-ha-pacemaker.rst:139
msgid "Set up Corosync with multicast"
msgstr ""

#: ../controller-ha-pacemaker.rst:141
msgid ""
"Most distributions ship an example configuration file (:file:`corosync.conf."
"example`) as part of the documentation bundled with the Corosync package. An "
"example Corosync configuration file is shown below:"
msgstr ""

#: ../controller-ha-pacemaker.rst:145
msgid ""
"**Example Corosync configuration file for multicast (``corosync.conf``)**"
msgstr ""
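# The full example file is not extracted into this catalog. A brief sketch
# of its totem stanza, assuming a 10.0.0.0/24 management network (addresses
# illustrative):
#
#   totem {
#     version: 2
#     secauth: on
#     rrp_mode: active
#     interface {
#       ringnumber: 0
#       bindnetaddr: 10.0.0.0
#       mcastaddr: 239.255.42.1
#       mcastport: 5405
#     }
#   }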

#: ../controller-ha-pacemaker.rst:216 ../controller-ha-pacemaker.rst:344
#: ../controller-ha-pacemaker.rst:418 ../controller-ha-pacemaker.rst:598
msgid "Note the following:"
msgstr ""

#: ../controller-ha-pacemaker.rst:218
msgid ""
"The ``token`` value specifies the time, in milliseconds, during which the "
"Corosync token is expected to be transmitted around the ring. When this "
"timeout expires, the token is declared lost, and after "
"``token_retransmits_before_loss_const`` lost tokens, the non-responding "
"processor (cluster node) is declared dead. ``token × "
"token_retransmits_before_loss_const`` is the maximum time a node is allowed "
"to not respond to cluster messages before being considered dead. The default "
"for token is 1000 milliseconds (1 second), with 4 allowed retransmits. These "
"defaults are intended to minimize failover times, but can cause frequent "
"false alarms and unintended failovers in case of short network "
"interruptions. The values used here are safer, albeit with slightly extended "
"failover times."
msgstr ""

#: ../controller-ha-pacemaker.rst:234
msgid ""
"With ``secauth`` enabled, Corosync nodes mutually authenticate using a 128-"
"byte shared secret stored in the :file:`/etc/corosync/authkey` file. This "
"can be generated with the :command:`corosync-keygen` utility. Cluster "
"communications are encrypted when using ``secauth``."
msgstr ""

#: ../controller-ha-pacemaker.rst:240
msgid ""
"In Corosync configurations using redundant networking (with more than one "
"interface), you must select a Redundant Ring Protocol (RRP) mode other than "
"``none``. We recommend ``active`` as the RRP mode."
msgstr ""

#: ../controller-ha-pacemaker.rst:245
msgid "Note the following about the recommended interface configuration:"
msgstr ""

#: ../controller-ha-pacemaker.rst:247
msgid ""
"Each configured interface must have a unique ``ringnumber``, starting with 0."
msgstr ""

#: ../controller-ha-pacemaker.rst:250
msgid ""
"The ``bindnetaddr`` is the network address of the interfaces to bind to. The "
"example uses two network addresses of /24 IPv4 subnets."
msgstr ""

#: ../controller-ha-pacemaker.rst:253
msgid ""
"Multicast groups (``mcastaddr``) must not be reused across cluster "
"boundaries. No two distinct clusters should ever use the same multicast "
"group. Be sure to select multicast addresses compliant with `RFC 2365, "
"\"Administratively Scoped IP Multicast\" <http://www.ietf.org/rfc/rfc2365."
"txt>`_."
msgstr ""

#: ../controller-ha-pacemaker.rst:260
msgid ""
"For firewall configurations, Corosync communicates over UDP only, and uses "
"``mcastport`` (for receives) and ``mcastport - 1`` (for sends)."
msgstr ""

#: ../controller-ha-pacemaker.rst:263
msgid ""
"The service declaration for the Pacemaker service may be placed in the :file:"
"`corosync.conf` file directly or in its own separate file, :file:`/etc/"
"corosync/service.d/pacemaker`."
msgstr ""
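# A sketch of the separate-file variant, /etc/corosync/service.d/pacemaker:
#
#   service {
#     name: pacemaker
#     ver: 1
#   }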

#: ../controller-ha-pacemaker.rst:269
msgid ""
"If you are using Corosync version 2 on Ubuntu 14.04, remove or comment out "
"lines under the service stanza. These stanzas enable Pacemaker to start up. "
"Another potential problem is the boot and shutdown order of Corosync and "
"Pacemaker. To force Pacemaker to start after Corosync and stop before "
"Corosync, fix the start and kill symlinks manually:"
msgstr ""
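# A sketch of fixing the symlinks on Ubuntu 14.04 (sysvinit priorities
# illustrative):
#
#   # update-rc.d pacemaker start 20 2 3 4 5 . stop 00 0 1 6 .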

#: ../controller-ha-pacemaker.rst:280
msgid ""
"The Pacemaker service also requires an additional configuration file ``/etc/"
"corosync/uidgid.d/pacemaker`` to be created with the following content:"
msgstr ""
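# A sketch of the /etc/corosync/uidgid.d/pacemaker content (the standard
# Pacemaker user and group):
#
#   uidgid {
#     uid: hacluster
#     gid: haclient
#   }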

#: ../controller-ha-pacemaker.rst:291
msgid ""
"Once created, synchronize the :file:`corosync.conf` file (and the :file:"
"`authkey` file if the secauth option is enabled) across all cluster nodes."
msgstr ""

#: ../controller-ha-pacemaker.rst:298
msgid "Set up Corosync with unicast"
msgstr ""

#: ../controller-ha-pacemaker.rst:300
msgid ""
"For environments that do not support multicast, Corosync should be "
"configured for unicast. An example fragment of the :file:`corosync.conf` "
"file for unicast is shown below:"
msgstr ""

#: ../controller-ha-pacemaker.rst:304
msgid ""
"**Corosync configuration file fragment for unicast (``corosync.conf``)**"
msgstr ""
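# The fragment itself is not extracted into this catalog. A sketch of the
# unicast-relevant parts, assuming two nodes (addresses illustrative):
#
#   totem {
#     transport: udpu
#   }
#
#   nodelist {
#     node {
#       ring0_addr: 10.0.0.12
#       nodeid: 1
#     }
#     node {
#       ring0_addr: 10.0.0.13
#       nodeid: 2
#     }
#   }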

#: ../controller-ha-pacemaker.rst:346
msgid ""
"If the ``broadcast`` parameter is set to ``yes``, the broadcast address is "
"used for communication. If this option is set, the ``mcastaddr`` parameter "
"should not be set."
msgstr ""

#: ../controller-ha-pacemaker.rst:350
msgid ""
"The ``transport`` directive controls the transport mechanism. To avoid the "
"use of multicast entirely, specify the ``udpu`` unicast transport parameter. "
"This requires specifying the list of members in the ``nodelist`` directive. "
"This allows the membership to be determined before deployment. The default "
"is ``udp``. The transport type can also be set to ``udpu`` or ``iba``."
msgstr ""

#: ../controller-ha-pacemaker.rst:357
msgid ""
"Within the ``nodelist`` directive, it is possible to specify specific "
"information about the nodes in the cluster. The directive can contain only "
"the node sub-directive, which specifies every node that should be a member "
"of the membership, and where non-default options are needed. Every node must "
"have at least the ``ring0_addr`` field filled."
msgstr ""

#: ../controller-ha-pacemaker.rst:365
msgid ""
"For UDPU, every node that should be a member of the membership must be "
"specified."
msgstr ""

#: ../controller-ha-pacemaker.rst:367
msgid "Possible options are:"
msgstr ""

#: ../controller-ha-pacemaker.rst:369
msgid ""
"``ring{X}_addr`` specifies the IP address of one of the nodes. ``{X}`` is "
"the ring number."
msgstr ""

#: ../controller-ha-pacemaker.rst:372
msgid ""
"``nodeid`` is optional when using IPv4 and required when using IPv6. This is "
"a 32-bit value specifying the node identifier delivered to the cluster "
"membership service. If this is not specified with IPv4, the node ID is "
"determined from the 32-bit IP address of the system to which the system is "
"bound with ring identifier of 0. The node identifier value of zero is "
"reserved and should not be used."
msgstr ""

#: ../controller-ha-pacemaker.rst:383
msgid "Set up Corosync with votequorum library"
msgstr ""

#: ../controller-ha-pacemaker.rst:385
msgid ""
"The votequorum library is part of the Corosync project. It provides an "
"interface to the vote-based quorum service and it must be explicitly enabled "
"in the Corosync configuration file. The main role of the votequorum library "
"is to avoid split-brain situations, but it also provides a mechanism to:"
msgstr ""

#: ../controller-ha-pacemaker.rst:390
msgid "Query the quorum status"
msgstr ""

#: ../controller-ha-pacemaker.rst:392
msgid "List the nodes known to the quorum service"
msgstr ""

#: ../controller-ha-pacemaker.rst:394
msgid "Receive notifications of quorum state changes"
msgstr ""

#: ../controller-ha-pacemaker.rst:396
msgid "Change the number of votes assigned to a node"
msgstr ""

#: ../controller-ha-pacemaker.rst:398
msgid "Change the number of expected votes for a cluster to be quorate"
msgstr ""

#: ../controller-ha-pacemaker.rst:400
msgid ""
"Connect an additional quorum device to allow small clusters to remain "
"quorate during node outages"
msgstr ""

#: ../controller-ha-pacemaker.rst:403
msgid ""
"The votequorum library has been created to replace and eliminate ``qdisk``, "
"the disk-based quorum daemon for CMAN, from advanced cluster configurations."
msgstr ""

#: ../controller-ha-pacemaker.rst:406
msgid ""
"A sample votequorum service configuration in the :file:`corosync.conf` file "
"is:"
msgstr ""
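# A sketch of the quorum stanza, with values matching the discussion that
# follows:
#
#   quorum {
#     provider: corosync_votequorum
#     expected_votes: 7
#     wait_for_all: 1
#     last_man_standing: 1
#     last_man_standing_window: 10000
#   }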

#: ../controller-ha-pacemaker.rst:420
msgid ""
"Specifying ``corosync_votequorum`` enables the votequorum library. This is "
"the only required option."
msgstr ""

#: ../controller-ha-pacemaker.rst:423
msgid ""
"With ``expected_votes`` set to 7, the cluster is fully operational with 7 "
"nodes (each node has 1 vote) and a quorum of 4. If a list of nodes is "
"specified as ``nodelist``, the ``expected_votes`` value is ignored."
msgstr ""

#: ../controller-ha-pacemaker.rst:427
msgid ""
"When you start up a cluster (all nodes down) and set ``wait_for_all`` to 1, "
"the cluster quorum is held until all nodes are online and have joined the "
"cluster for the first time. This parameter is new in Corosync 2.0."
msgstr ""

#: ../controller-ha-pacemaker.rst:431
msgid ""
"Setting ``last_man_standing`` to 1 enables the Last Man Standing (LMS) "
"feature. By default, it is disabled (set to 0). If a cluster is on the "
"quorum edge (``expected_votes:`` set to 7; ``online nodes:`` set to 4) for "
"longer than the time specified for the ``last_man_standing_window`` "
"parameter, the cluster can recalculate quorum and continue operating even if "
"the next node is lost. This logic is repeated until the number of online "
"nodes in the cluster reaches 2. In order to allow the cluster to step down "
"from 2 members to only 1, the ``auto_tie_breaker`` parameter needs to be "
"set. We do not recommend this for production environments."
msgstr ""

#: ../controller-ha-pacemaker.rst:442
msgid ""
"``last_man_standing_window`` specifies the time, in milliseconds, required "
"to recalculate quorum after one or more hosts have been lost from the "
"cluster. To perform a new quorum recalculation, the cluster must have quorum "
"for at least the interval specified for ``last_man_standing_window``. The "
"default is 10000 ms (10 seconds)."
msgstr ""

#: ../controller-ha-pacemaker.rst:452
msgid "Start Corosync"
msgstr ""

#: ../controller-ha-pacemaker.rst:454
msgid ""
"Corosync is started as a regular system service. Depending on your "
"distribution, it may ship with an LSB init script, an upstart job, or a "
"Systemd unit file."
msgstr ""

#: ../controller-ha-pacemaker.rst:458
msgid "Start ``corosync`` with the LSB init script:"
msgstr ""

#: ../controller-ha-pacemaker.rst:464 ../controller-ha-pacemaker.rst:536
msgid "Alternatively:"
msgstr ""

#: ../controller-ha-pacemaker.rst:470
msgid "Start ``corosync`` with upstart:"
msgstr ""

#: ../controller-ha-pacemaker.rst:476
msgid "Start ``corosync`` with systemd unit file:"
msgstr ""

#: ../controller-ha-pacemaker.rst:482
msgid ""
"You can now check the ``corosync`` connectivity with one of these tools."
msgstr ""

#: ../controller-ha-pacemaker.rst:484
msgid ""
"Use the :command:`corosync-cfgtool` utility with the ``-s`` option to get a "
"summary of the health of the communication rings:"
msgstr ""

#: ../controller-ha-pacemaker.rst:499
msgid ""
"Use the :command:`corosync-objctl` utility to dump the Corosync cluster "
"member list:"
msgstr ""

#: ../controller-ha-pacemaker.rst:504
msgid ""
"If you are using Corosync version 2, use the :command:`corosync-cmapctl` "
"utility instead of :command:`corosync-objctl`; it is a direct replacement."
msgstr ""

#: ../controller-ha-pacemaker.rst:517
msgid ""
"You should see a ``status=joined`` entry for each of your constituent "
"cluster nodes."
msgstr ""

#: ../controller-ha-pacemaker.rst:523
msgid "Start Pacemaker"
msgstr ""

#: ../controller-ha-pacemaker.rst:525
msgid ""
"After the ``corosync`` service has been started and you have verified that "
"the cluster is communicating properly, you can start :command:`pacemakerd`, "
"the Pacemaker master control process. Choose one from the following four "
"ways to start it:"
msgstr ""

#: ../controller-ha-pacemaker.rst:530
msgid "Start ``pacemaker`` with the LSB init script:"
msgstr ""

#: ../controller-ha-pacemaker.rst:542
msgid "Start ``pacemaker`` with upstart:"
msgstr ""

#: ../controller-ha-pacemaker.rst:548
msgid "Start ``pacemaker`` with the systemd unit file:"
msgstr ""

#: ../controller-ha-pacemaker.rst:554
msgid ""
"After the ``pacemaker`` service has started, Pacemaker creates a default "
"empty cluster configuration with no resources. Use the :command:`crm_mon` "
"utility to observe the status of ``pacemaker``:"
msgstr ""

#: ../controller-ha-pacemaker.rst:576
msgid "Set basic cluster properties"
msgstr ""

#: ../controller-ha-pacemaker.rst:578
msgid ""
"After you set up your Pacemaker cluster, set a few basic cluster properties:"
msgstr ""

#: ../controller-ha-pacemaker.rst:600
msgid ""
"Setting the ``pe-warn-series-max``, ``pe-input-series-max``, and ``pe-error-"
"series-max`` parameters to 1000 instructs Pacemaker to keep a longer history "
"of the inputs processed and errors and warnings generated by its Policy "
"Engine. This history is useful if you need to troubleshoot the cluster."
msgstr ""

#: ../controller-ha-pacemaker.rst:606
msgid ""
"Pacemaker uses an event-driven approach to cluster state processing. The "
"``cluster-recheck-interval`` parameter (which defaults to 15 minutes) "
"defines the interval at which certain Pacemaker actions occur. It is usually "
"prudent to reduce this to a shorter interval, such as 5 or 3 minutes."
msgstr ""

#: ../controller-ha-pacemaker.rst:612
msgid ""
"By default, STONITH is enabled in Pacemaker, but STONITH mechanisms (to "
"shut down a node via IPMI or ssh) are not configured. In this case, "
"Pacemaker will refuse to start any resources. For a production cluster, it "
"is recommended to configure appropriate STONITH mechanisms. For demo or "
"testing purposes, STONITH can be disabled completely as follows:"
msgstr ""
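# A sketch of the pcs variant of these property changes:
#
#   $ pcs property set pe-warn-series-max=1000 \
#     pe-input-series-max=1000 \
#     pe-error-series-max=1000 \
#     cluster-recheck-interval=5min
#
# and, for demo or testing environments only:
#
#   $ pcs property set stonith-enabled=false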

#: ../controller-ha-pacemaker.rst:631
msgid "After you make these changes, commit the updated configuration."
msgstr ""

#: ../controller-ha-telemetry.rst:3
msgid "Highly available Telemetry"
msgstr ""

#: ../controller-ha-telemetry.rst:5
msgid ""
"The `Telemetry service <https://docs.openstack.org/admin-guide/common/get-"
"started-telemetry.html>`_ provides a data collection service and an alarming "
"service."
msgstr ""

#: ../controller-ha-telemetry.rst:10
msgid "Telemetry polling agent"
msgstr ""

#: ../controller-ha-telemetry.rst:12
msgid ""
"The Telemetry polling agent can be configured to partition its polling "
"workload between multiple agents. This enables high availability (HA)."
msgstr ""

#: ../controller-ha-telemetry.rst:15
msgid ""
"Both the central and the compute agent can run in an HA deployment. This "
"means that multiple instances of these services can run in parallel with "
"workload partitioning among these running instances."
msgstr ""

#: ../controller-ha-telemetry.rst:19
msgid ""
"The `Tooz <https://pypi.org/project/tooz>`_ library provides the "
"coordination within the groups of service instances. It provides an API "
"above several back ends that can be used for building distributed "
"applications."
msgstr ""

#: ../controller-ha-telemetry.rst:24
msgid ""
"Tooz supports `various drivers <https://docs.openstack.org/tooz/latest/user/"
"drivers.html>`_ including the following back end solutions:"
msgstr ""

#: ../controller-ha-telemetry.rst:29 ../controller-ha-telemetry.rst:32
msgid "Recommended solution by the Tooz project."
msgstr ""

#: ../controller-ha-telemetry.rst:29
msgid "`Zookeeper <https://zookeeper.apache.org/>`_:"
msgstr ""

#: ../controller-ha-telemetry.rst:32
msgid "`Redis <https://redis.io/>`_:"
msgstr ""

#: ../controller-ha-telemetry.rst:35
msgid "Recommended for testing."
msgstr ""

#: ../controller-ha-telemetry.rst:35
msgid "`Memcached <https://memcached.org/>`_:"
msgstr ""

#: ../controller-ha-telemetry.rst:37
msgid ""
"You must configure a supported Tooz driver for the HA deployment of the "
"Telemetry services."
msgstr ""

#: ../controller-ha-telemetry.rst:40
msgid ""
"For information about the required configuration options to set in the :file:"
"`ceilometer.conf`, see the `coordination section <https://docs.openstack.org/"
"ocata/config-reference/telemetry.html>`_ in the OpenStack Configuration "
"Reference."
msgstr ""

#: ../controller-ha-telemetry.rst:47
msgid ""
"Only one instance of the central and compute agent services is able to run "
"and function correctly if the ``backend_url`` option is not set."
msgstr ""
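# A sketch of the coordination setting in ceilometer.conf, assuming a Redis
# back end on a host named controller (URL illustrative):
#
#   [coordination]
#   backend_url = redis://controller:6379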

#: ../controller-ha-telemetry.rst:50
msgid ""
"The availability check of the instances is provided by heartbeat messages. "
"When the connection with an instance is lost, the workload will be "
"reassigned within the remaining instances in the next polling cycle."
msgstr ""

#: ../controller-ha-telemetry.rst:56
msgid ""
"Memcached uses a timeout value, which should always be set to a value that "
"is higher than the heartbeat value set for Telemetry."
msgstr ""

#: ../controller-ha-telemetry.rst:59
msgid ""
"For backward compatibility and supporting existing deployments, the central "
"agent configuration supports using different configuration files. This is "
"for groups of service instances that are running in parallel. To enable "
"this configuration, set a value for the ``partitioning_group_prefix`` option "
"in the `polling section <https://docs.openstack.org/ocata/config-reference/"
"telemetry/telemetry-config-options.html>`_ in the OpenStack Configuration "
"Reference."
msgstr ""

#: ../controller-ha-telemetry.rst:69
msgid ""
"For each sub-group of the central agent pool with the same "
"``partitioning_group_prefix``, a disjoint subset of meters must be polled "
"to avoid missing or duplicated samples. The list of meters to poll can be "
"set in the :file:`/etc/ceilometer/pipeline.yaml` configuration file. For "
"more information about pipelines, see the `Data processing and pipelines "
"<https://docs.openstack.org/admin-guide/telemetry-data-pipelines.html>`_ "
"section."
msgstr ""

#: ../controller-ha-telemetry.rst:77
msgid ""
"To enable the compute agent to run multiple instances simultaneously with "
"workload partitioning, the ``workload_partitioning`` option must be set to "
"``True`` under the `compute section <https://docs.openstack.org/ocata/config-"
"reference/telemetry.html>`_ in the :file:`ceilometer.conf` configuration "
"file."
msgstr ""
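# The ceilometer.conf fragment this entry describes:
#
#   [compute]
#   workload_partitioning = True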

#: ../controller-ha-vip.rst:3
msgid "Configure the VIP"
msgstr ""

#: ../controller-ha-vip.rst:5
msgid ""
"You must select and assign a virtual IP address (VIP) that can freely float "
"between cluster nodes."
msgstr ""

#: ../controller-ha-vip.rst:8
msgid ""
"This configuration creates ``vip``, a virtual IP address for use by the API "
"node (``10.0.0.11``)."
msgstr ""

#: ../controller-ha-vip.rst:11
msgid "For ``crmsh``:"
msgstr ""

#: ../controller-ha-vip.rst:18
msgid "For ``pcs``:"
msgstr ""
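# A sketch of the VIP resource for each tool, assuming a /24 netmask
# (netmask and monitor interval illustrative):
#
#   crmsh:
#     primitive vip ocf:heartbeat:IPaddr2 \
#       params ip="10.0.0.11" cidr_netmask="24" \
#       op monitor interval="30s"
#
#   pcs:
#     $ pcs resource create vip ocf:heartbeat:IPaddr2 \
#       ip=10.0.0.11 cidr_netmask=24 op monitor interval=30s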

#: ../controller-ha.rst:3
msgid "Configuring the controller"
msgstr ""

#: ../controller-ha.rst:5
msgid ""
"The cloud controller runs on the management network and must talk to all "
"other services."
msgstr ""

#: ../controller-ha.rst:20
msgid "Overview of highly available controllers"
msgstr ""

#: ../controller-ha.rst:22
msgid ""
"OpenStack is a set of services exposed to the end users as HTTP(s) APIs. "
"Additionally, for its own internal usage, OpenStack requires an SQL "
"database server and an AMQP broker. The physical servers, where all the "
"components are running, are called controllers. This modular OpenStack "
"architecture allows you to duplicate all the components and run them on "
"different controllers. By making all the components redundant, it is "
"possible to make OpenStack highly available."
msgstr ""

#: ../controller-ha.rst:31
msgid ""
"In general, we can divide all the OpenStack components into three categories:"
msgstr ""

#: ../controller-ha.rst:33
msgid ""
"OpenStack APIs: APIs that are HTTP(s) stateless services written in Python, "
"easy to duplicate and mostly easy to load balance."
msgstr ""

#: ../controller-ha.rst:36
msgid ""
"The SQL relational database server provides stateful services consumed by "
"other components. Supported databases are MySQL, MariaDB, and PostgreSQL. "
"Making the SQL database redundant is complex."
msgstr ""

#: ../controller-ha.rst:40
msgid ""
":term:`Advanced Message Queuing Protocol (AMQP)` provides OpenStack internal "
"stateful communication service."
msgstr ""

#: ../controller-ha.rst:44
msgid "Common deployment architectures"
msgstr ""

#: ../controller-ha.rst:46
msgid ""
"We recommend two primary architectures for making OpenStack highly available."
msgstr ""

#: ../controller-ha.rst:48
msgid ""
"The architectures differ in the sets of services managed by the cluster."
msgstr ""

#: ../controller-ha.rst:51
msgid ""
"Both use a cluster manager, such as Pacemaker or Veritas, to orchestrate the "
"actions of the various services across a set of machines. Because we are "
"focused on FOSS, we refer to these as Pacemaker architectures."
msgstr ""

#: ../controller-ha.rst:56
msgid ""
"Traditionally, Pacemaker has been positioned as an all-encompassing "
"solution. However, as OpenStack services have matured, they are increasingly "
"able to run in an active/active configuration and gracefully tolerate the "
"disappearance of the APIs on which they depend."
msgstr ""

#: ../controller-ha.rst:62
msgid ""
"With this in mind, some vendors are restricting Pacemaker's use to services "
"that must operate in an active/passive mode (such as ``cinder-volume``), "
"those with multiple states (for example, Galera), and those with complex "
"bootstrapping procedures (such as RabbitMQ)."
msgstr ""

#: ../controller-ha.rst:67
msgid ""
"The majority of services, needing no real orchestration, are handled by "
"systemd on each node. This approach avoids the need to coordinate service "
"upgrades or location changes with the cluster and has the added advantage of "
"more easily scaling beyond Corosync's 16 node limit. However, it will "
"generally require the addition of an enterprise monitoring solution such as "
"Nagios or Sensu for those wanting centralized failure reporting."
msgstr ""

#: ../environment-hardware.rst:3
msgid "Hardware considerations for high availability"
msgstr ""

#: ../environment-hardware.rst:5
msgid ""
"When you use high availability, consider the hardware requirements needed "
"for your application."
msgstr ""

#: ../environment-hardware.rst:9
msgid "Hardware setup"
msgstr ""

#: ../environment-hardware.rst:11
msgid "The following are the standard hardware requirements:"
msgstr ""

#: ../environment-hardware.rst:13
msgid ""
"Provider networks: See the *Overview -> Networking Option 1: Provider "
"networks* section of the `Install Guides <https://docs.openstack.org/ocata/"
"install>`_ depending on your distribution."
msgstr ""

#: ../environment-hardware.rst:17
msgid ""
"Self-service networks: See the *Overview -> Networking Option 2: Self-"
"service networks* section of the `Install Guides <https://docs.openstack.org/"
"ocata/install>`_ depending on your distribution."
msgstr ""

#: ../environment-hardware.rst:22
msgid ""
"OpenStack does not require a significant amount of resources and the "
"following minimum requirements should support a proof-of-concept high "
"availability environment with core services and several instances:"
msgstr ""

#: ../environment-hardware.rst:27
msgid "Memory"
msgstr ""

#: ../environment-hardware.rst:27
msgid "NIC"
msgstr ""

#: ../environment-hardware.rst:27
msgid "Node type"
msgstr ""

#: ../environment-hardware.rst:27
msgid "Processor Cores"
msgstr ""

#: ../environment-hardware.rst:27
msgid "Storage"
msgstr ""

#: ../environment-hardware.rst:29
msgid "12 GB"
msgstr ""

#: ../environment-hardware.rst:29
msgid "120 GB"
msgstr ""

#: ../environment-hardware.rst:29 ../environment-hardware.rst:31
msgid "2"
msgstr ""

#: ../environment-hardware.rst:29
msgid "4"
msgstr ""

#: ../environment-hardware.rst:29
msgid "controller node"
msgstr ""

#: ../environment-hardware.rst:31
msgid "12+ GB"
msgstr ""

#: ../environment-hardware.rst:31
msgid "120+ GB"
msgstr ""

#: ../environment-hardware.rst:31
msgid "8+"
msgstr ""

#: ../environment-hardware.rst:31
msgid "compute node"
msgstr ""

#: ../environment-hardware.rst:34
msgid ""
"We recommend that the maximum latency between any two controller nodes is 2 "
"milliseconds. Although the cluster software can be tuned to operate at "
"higher latencies, some vendors insist on this value before agreeing to "
"support the installation."
msgstr ""

#: ../environment-hardware.rst:39
msgid "You can use the `ping` command to find the latency between two servers."
msgstr ""

#: ../environment-hardware.rst:42
msgid "Virtualized hardware"
msgstr ""

#: ../environment-hardware.rst:44
msgid ""
"For demonstrations and studying, you can set up a test environment on "
"virtual machines (VMs). This has the following benefits:"
msgstr ""

#: ../environment-hardware.rst:47
msgid ""
"One physical server can support multiple nodes, each of which supports "
"almost any number of network interfaces."
msgstr ""

#: ../environment-hardware.rst:50
msgid ""
"You can take periodic snapshots throughout the installation process and "
"roll back to a working configuration in the event of a problem."
msgstr ""

#: ../environment-hardware.rst:53
msgid ""
"However, running an OpenStack environment on VMs degrades the performance of "
"your instances, particularly if your hypervisor or processor lacks support "
"for hardware acceleration of nested VMs."
msgstr ""

#: ../environment-hardware.rst:59
msgid ""
"When installing highly available OpenStack on VMs, be sure that your "
"hypervisor permits promiscuous mode and disables MAC address filtering on "
"the external network."
msgstr ""

#: ../environment-memcached.rst:3
msgid "Installing Memcached"
msgstr ""

#: ../environment-memcached.rst:5
msgid ""
"Most OpenStack services can use Memcached to store ephemeral data such as "
"tokens. Although Memcached does not support typical forms of redundancy such "
"as clustering, OpenStack services can use almost any number of instances by "
"configuring multiple hostnames or IP addresses."
msgstr ""

#: ../environment-memcached.rst:10
msgid ""
"The Memcached client implements hashing to balance objects among the "
"instances. Failure of an instance only impacts a percentage of the objects, "
"and the client automatically removes it from the list of instances."
msgstr ""

#: ../environment-memcached.rst:14
msgid ""
"To install and configure Memcached, read the `official documentation "
"<https://github.com/Memcached/Memcached/wiki#getting-started>`_."
msgstr ""

#: ../environment-memcached.rst:17
msgid ""
"Memory caching is managed by `oslo.cache <http://specs.openstack.org/"
"openstack/oslo-specs/specs/kilo/oslo-cache-using-dogpile.html>`_. This "
"ensures consistency across all projects when using multiple Memcached "
"servers. The following is an example configuration with three hosts:"
msgstr ""
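# A sketch of the three-host list, assuming the oslo.cache [cache] section
# (host names illustrative):
#
#   [cache]
#   memcache_servers = controller1:11211,controller2:11211,controller3:11211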

#: ../environment-memcached.rst:26
msgid ""
"By default, ``controller1`` handles the caching service. If the host goes "
"down, ``controller2`` or ``controller3`` will take over the service."
msgstr ""

#: ../environment-memcached.rst:29
msgid ""
"For more information about Memcached installation, see the *Environment -> "
"Memcached* section in the `Installation Guides <https://docs.openstack.org/"
"ocata/install/>`_ depending on your distribution."
msgstr ""

#: ../environment-ntp.rst:3
msgid "Configure NTP"
msgstr ""

#: ../environment-ntp.rst:5
msgid ""
"You must configure NTP to properly synchronize services among nodes. We "
"recommend that you configure the controller node to reference more accurate "
"(lower stratum) servers and other nodes to reference the controller node. "
"For more information, see the `Installation Guides <https://docs.openstack."
"org/ocata/install/>`_."
msgstr ""

#: ../environment-operatingsystem.rst:3
msgid "Installing the operating system"
msgstr ""

#: ../environment-operatingsystem.rst:5
msgid ""
"The first step in setting up your highly available OpenStack cluster is to "
"install the operating system on each node. Follow the instructions in the "
"*Environment* section of the `Installation Guides <https://docs.openstack."
"org/ocata/install>`_ depending on your distribution."
msgstr ""

#: ../environment-operatingsystem.rst:11
msgid ""
"The OpenStack Installation Guides also include a list of the services that "
"use passwords with important notes about using them."
msgstr ""

#: ../environment-operatingsystem.rst:17
msgid ""
"Before following this guide to configure the highly available OpenStack "
"cluster, ensure the IP ``10.0.0.11`` and hostname ``controller`` are not in "
"use."
msgstr ""

#: ../environment-operatingsystem.rst:21
msgid "This guide uses the following example IP addresses:"
msgstr ""

#: ../environment.rst:3
msgid "Configuring the basic environment"
msgstr ""

#: ../environment.rst:5
msgid ""
"This chapter describes the basic environment for high availability, such as "
"hardware, operating system, and common services."
msgstr ""

#: ../index.rst:3
msgid "OpenStack High Availability Guide"
msgstr ""

#: ../index.rst:6
msgid "Abstract"
msgstr ""

#: ../index.rst:8
msgid ""
"This guide describes how to install and configure OpenStack for high "
"availability. It supplements the Installation Guides and assumes that you "
"are familiar with the material in those guides."
msgstr ""

#: ../index.rst:14
msgid ""
"This guide was last updated as of the Ocata release, documenting the "
"OpenStack Ocata, Newton, and Mitaka releases. It may not apply to EOL "
"releases Kilo and Liberty."
msgstr ""

#: ../index.rst:18
msgid ""
"We advise that you read this at your own discretion when planning your "
"OpenStack cloud."
msgstr ""

#: ../index.rst:21
msgid "This guide is intended as advice only."
msgstr ""

#: ../index.rst:23
msgid ""
"The OpenStack HA team is based on voluntary contributions from the OpenStack "
"community. You can contact the HA community directly in the #openstack-ha "
"channel on Freenode IRC, or by sending mail to the openstack-dev mailing "
"list with the [HA] prefix in the subject header."
msgstr ""

#: ../index.rst:29
msgid ""
"The OpenStack HA community used to hold `weekly IRC meetings <https://wiki."
"openstack.org/wiki/Meetings/HATeamMeeting>`_ to discuss a range of topics "
"relating to HA in OpenStack. The `logs of all past meetings <http://"
"eavesdrop.openstack.org/meetings/ha/>`_ are still available to read."
msgstr ""

#: ../index.rst:37
msgid "Contents"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:3
msgid "The Pacemaker architecture"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:6
msgid "What is a cluster manager?"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:8
msgid ""
"At its core, a cluster is a distributed finite state machine capable of co-"
"ordinating the startup and recovery of inter-related services across a set "
"of machines."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:12
msgid ""
"Even a distributed or replicated application that is able to survive "
"failures on one or more machines can benefit from a cluster manager because "
"a cluster manager has the following capabilities:"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:16
msgid "Awareness of other applications in the stack"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:18
msgid ""
"While SYS-V init replacements like systemd can provide deterministic "
"recovery of a complex stack of services, the recovery is limited to one "
"machine and lacks the context of what is happening on other machines. This "
"context is crucial to determine the difference between a local failure, and "
"clean startup and recovery after a total site failure."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:25
msgid "Awareness of instances on other machines"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:27
msgid ""
"Services like RabbitMQ and Galera have complicated boot-up sequences that "
"require co-ordination, and often serialization, of startup operations across "
"all machines in the cluster. This is especially true after a site-wide "
"failure or shutdown where you must first determine the last machine to be "
"active."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:33
msgid ""
"A shared implementation and calculation of `quorum <https://en.wikipedia.org/"
"wiki/Quorum_(Distributed_Systems)>`_"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:36
msgid ""
"It is very important that all members of the system share the same view of "
"who their peers are and whether or not they are in the majority. Failure to "
"do this leads very quickly to an internal `split-brain <https://en.wikipedia."
"org/wiki/Split-brain_(computing)>`_ state. This is where different parts of "
"the system are pulling in different and incompatible directions."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:43
msgid ""
"Data integrity through fencing (a non-responsive process does not imply it "
"is not doing anything)"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:46
msgid ""
"A single application does not have sufficient context to know the difference "
"between failure of a machine and failure of the application on a machine. "
"The usual practice is to assume the machine is dead and continue working, "
"however this is highly risky. A rogue process or machine could still be "
"responding to requests and generally causing havoc. The safer approach is to "
"make use of remotely accessible power switches and/or network switches and "
"SAN controllers to fence (isolate) the machine before continuing."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:55
msgid "Automated recovery of failed instances"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:57
msgid ""
"While the application can still run after the failure of several instances, "
"it may not have sufficient capacity to serve the required volume of "
"requests. A cluster can automatically recover failed instances to prevent "
"additional load induced failures."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:62
msgid ""
"For these reasons, we highly recommend the use of a cluster manager like "
"`Pacemaker <http://clusterlabs.org>`_."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:66
msgid "Deployment flavors"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:68
msgid ""
"It is possible to deploy three different flavors of the Pacemaker "
"architecture. The two extremes are ``Collapsed`` (where every component runs "
"on every node) and ``Segregated`` (where every component runs in its own 3+ "
"node cluster)."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:73
msgid ""
"Regardless of which flavor you choose, we recommend that clusters contain at "
"least three nodes so that you can take advantage of `quorum <quorum_>`_."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:77
msgid ""
"Quorum becomes important when a failure causes the cluster to split in two "
"or more partitions. In this situation, you want the majority members of the "
"system to ensure the minority are truly dead (through fencing) and continue "
"to host resources. For a two-node cluster, no side has the majority and you "
"can end up in a situation where both sides fence each other, or both sides "
"are running the same services. This can lead to data corruption."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:84
msgid ""
"Clusters with an even number of hosts suffer from similar issues. A single "
"network failure could easily cause an N:N split where neither side retains "
"a majority. For this reason, we recommend an odd number of cluster members "
"when scaling up."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:89
msgid ""
"You can have up to 16 cluster members (this is currently limited by the "
"ability of corosync to scale higher). In extreme cases, 32 and even up to 64 "
"nodes could be possible. However, this is not well tested."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:94
msgid "Collapsed"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:96
msgid ""
"In a collapsed configuration, there is a single cluster of 3 or more nodes "
"on which every component is running."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:99
msgid ""
"This scenario has the advantage of requiring far fewer, if more powerful, "
"machines. Additionally, being part of a single cluster allows you to "
"accurately model the ordering dependencies between components."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:104
msgid "This scenario can be visualized as below."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:109
msgid ""
"You would choose this option if you prefer to have fewer but more powerful "
"boxes."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:112
msgid "This is the most common option and the one we document here."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:115
msgid "Segregated"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:117
msgid ""
"In this configuration, each service runs in a dedicated cluster of 3 or more "
"nodes."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:120
msgid ""
"The benefits of this approach are the physical isolation between components "
"and the ability to add capacity to specific components."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:123
msgid ""
"You would choose this option if you prefer to have more but less powerful "
"boxes."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:126
msgid ""
"This scenario can be visualized as below, where each box below represents a "
"cluster of three or more guests."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:133
msgid "Mixed"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:135
msgid ""
"It is also possible to follow a segregated approach for one or more "
"components that are expected to be a bottleneck and use a collapsed approach "
"for the remainder."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:140
msgid "Proxy server"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:142
msgid ""
"Almost all services in this stack benefit from being proxied. Using a proxy "
"server provides the following capabilities:"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:145
msgid "Load distribution"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:147
msgid ""
"Many services can act in an active/active capacity, however, they usually "
"require an external mechanism for distributing requests to one of the "
"available instances. The proxy server can serve this role."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:152
msgid "API isolation"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:154
msgid ""
"By sending all API access through the proxy, you can clearly identify "
"service interdependencies. You can also move them to locations other than "
"``localhost`` to increase capacity if the need arises."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:159
msgid "Simplified process for adding/removing of nodes"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:161
msgid ""
"Since all API access is directed to the proxy, adding or removing nodes has "
"no impact on the configuration of other services. This can be very useful in "
"upgrade scenarios where an entirely new set of machines can be configured "
"and tested in isolation before telling the proxy to direct traffic there "
"instead."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:167
msgid "Enhanced failure detection"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:169
msgid ""
"The proxy can be configured as a secondary mechanism for detecting service "
"failures. It can even be configured to look for nodes in a degraded state "
"(such as being too far behind in the replication) and take them out of "
"circulation."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:174
msgid ""
"The following components are currently unable to benefit from the use of a "
"proxy server:"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:177
msgid "RabbitMQ"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:179
msgid "MongoDB"
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:181
msgid ""
"We recommend HAProxy as the load balancer, however, there are many "
"alternative load balancing solutions in the marketplace."
msgstr ""

#: ../intro-ha-arch-pacemaker.rst:184
msgid ""
"Generally, we use round-robin to distribute load amongst instances of active/"
"active services. Alternatively, Galera uses ``stick-table`` options to "
"ensure that incoming connection to virtual IP (VIP) are directed to only one "
"of the available back ends. This helps avoid lock contention and prevent "
"deadlocks, although Galera can run active/active. Used in combination with "
"the ``httpchk`` option, this ensure only nodes that are in sync with their "
"peers are allowed to handle requests."
msgstr ""

#: ../intro-ha.rst:3
msgid "Introduction to OpenStack high availability"
msgstr ""

#: ../intro-ha.rst:5
msgid "High availability systems seek to minimize the following issues:"
msgstr ""

#: ../intro-ha.rst:7
msgid ""
"System downtime: Occurs when a user-facing service is unavailable beyond a "
"specified maximum amount of time."
msgstr ""

#: ../intro-ha.rst:10
msgid "Data loss: Accidental deletion or destruction of data."
msgstr ""

#: ../intro-ha.rst:12
msgid ""
"Most high availability systems guarantee protection against system downtime "
"and data loss only in the event of a single failure. However, they are also "
"expected to protect against cascading failures, where a single failure "
"deteriorates into a series of consequential failures. Many service providers "
"guarantee a :term:`Service Level Agreement (SLA)` including uptime "
"percentage of computing service, which is calculated based on the available "
"time and system downtime excluding planned outage time."
msgstr ""

#: ../intro-ha.rst:21
msgid "Redundancy and failover"
msgstr ""

#: ../intro-ha.rst:23
msgid ""
"High availability is implemented with redundant hardware running redundant "
"instances of each service. If one piece of hardware running one instance of "
"a service fails, the system can then failover to use another instance of a "
"service that is running on hardware that did not fail."
msgstr ""

#: ../intro-ha.rst:29
msgid ""
"A crucial aspect of high availability is the elimination of single points of "
"failure (SPOFs). A SPOF is an individual piece of equipment or software that "
"causes system downtime or data loss if it fails. In order to eliminate "
"SPOFs, check that mechanisms exist for redundancy of:"
msgstr ""

#: ../intro-ha.rst:35
msgid "Network components, such as switches and routers"
msgstr ""

#: ../intro-ha.rst:37
msgid "Applications and automatic service migration"
msgstr ""

#: ../intro-ha.rst:39
msgid "Storage components"
msgstr ""

#: ../intro-ha.rst:41
msgid "Facility services such as power, air conditioning, and fire protection"
msgstr ""

#: ../intro-ha.rst:43
msgid ""
"In the event that a component fails and a back-up system must take on its "
"load, most high availability systems will replace the failed component as "
"quickly as possible to maintain necessary redundancy. This way time spent in "
"a degraded protection state is minimized."
msgstr ""

#: ../intro-ha.rst:48
msgid ""
"Most high availability systems fail in the event of multiple independent "
"(non-consequential) failures. In this case, most implementations favor "
"protecting data over maintaining availability."
msgstr ""

#: ../intro-ha.rst:52
msgid ""
"High availability systems typically achieve an uptime percentage of 99.99% "
"or more, which roughly equates to less than an hour of cumulative downtime "
"per year. In order to achieve this, high availability systems should keep "
"recovery times after a failure to about one to two minutes, sometimes "
"significantly less."
msgstr ""

#: ../intro-ha.rst:58
msgid ""
"OpenStack currently meets such availability requirements for its own "
"infrastructure services, meaning that an uptime of 99.99% is feasible for "
"the OpenStack infrastructure proper. However, OpenStack does not guarantee "
"99.99% availability for individual guest instances."
msgstr ""

#: ../intro-ha.rst:63
msgid ""
"This document discusses some common methods of implementing highly available "
"systems, with an emphasis on the core OpenStack services and other open "
"source services that are closely aligned with OpenStack."
msgstr ""

#: ../intro-ha.rst:67
msgid ""
"You will need to address high availability concerns for any applications "
"software that you run on your OpenStack environment. The important thing is "
"to make sure that your services are redundant and available. How you achieve "
"that is up to you."
msgstr ""

#: ../intro-ha.rst:73
msgid "Stateless versus stateful services"
msgstr ""

#: ../intro-ha.rst:75
msgid "The following are the definitions of stateless and stateful services:"
msgstr ""

#: ../intro-ha.rst:78
msgid ""
"A service that provides a response after your request and then requires no "
"further attention. To make a stateless service highly available, you need to "
"provide redundant instances and load balance them. OpenStack services that "
"are stateless include ``nova-api``, ``nova-conductor``, ``glance-api``, "
"``keystone-api``, ``neutron-api``, and ``nova-scheduler``."
msgstr ""

#: ../intro-ha.rst:84
msgid "Stateless service"
msgstr ""

#: ../intro-ha.rst:87
msgid ""
"A service where subsequent requests to the service depend on the results of "
"the first request. Stateful services are more difficult to manage because a "
"single action typically involves more than one request. Providing additional "
"instances and load balancing does not solve the problem. For example, if the "
"horizon user interface reset itself every time you went to a new page, it "
"would not be very useful. OpenStack services that are stateful include the "
"OpenStack database and message queue. Making stateful services highly "
"available can depend on whether you choose an active/passive or active/"
"active configuration."
msgstr ""

#: ../intro-ha.rst:97
msgid "Stateful service"
msgstr ""

#: ../intro-ha.rst:100
msgid "Active/passive versus active/active"
msgstr ""

#: ../intro-ha.rst:102
msgid ""
"Stateful services can be configured as active/passive or active/active, "
"which are defined as follows:"
msgstr ""

#: ../intro-ha.rst:106
msgid ""
"Maintains a redundant instance that can be brought online when the active "
"service fails. For example, OpenStack writes to the main database while "
"maintaining a disaster recovery database that can be brought online if the "
"main database fails."
msgstr ""

#: ../intro-ha.rst:112
msgid ""
"A typical active/passive installation for a stateful service maintains a "
"replacement resource that can be brought online when required. Requests are "
"handled using a :term:`virtual IP address (VIP)` that facilitates returning "
"to service with minimal reconfiguration. A separate application (such as "
"Pacemaker or Corosync) monitors these services, bringing the backup online "
"as necessary."
msgstr ""

#: ../intro-ha.rst:117
msgid ":term:`active/passive configuration`"
msgstr ""

#: ../intro-ha.rst:120
msgid ""
"Each service also has a backup but manages both the main and redundant "
"systems concurrently. This way, if there is a failure, the user is unlikely "
"to notice. The backup system is already online and takes on increased load "
"while the main system is fixed and brought back online."
msgstr ""

#: ../intro-ha.rst:126
msgid ""
"Typically, an active/active installation for a stateless service maintains a "
"redundant instance, and requests are load balanced using a virtual IP "
"address and a load balancer such as HAProxy."
msgstr ""

#: ../intro-ha.rst:130
msgid ""
"A typical active/active installation for a stateful service includes "
"redundant services, with all instances having an identical state. In other "
"words, updates to one instance of a database update all other instances. "
"This way a request to one instance is the same as a request to any other. A "
"load balancer manages the traffic to these systems, ensuring that "
"operational systems always handle the request."
msgstr ""

#: ../intro-ha.rst:136
msgid ":term:`active/active configuration`"
msgstr ""

#: ../intro-ha.rst:139
msgid "Clusters and quorums"
msgstr ""

#: ../intro-ha.rst:141
msgid ""
"The quorum specifies the minimal number of nodes that must be functional in "
"a cluster of redundant nodes in order for the cluster to remain functional. "
"When one node fails and failover transfers control to other nodes, the "
"system must ensure that data and processes remain sane. To determine this, "
"the contents of the remaining nodes are compared and, if there are "
"discrepancies, a majority rules algorithm is implemented."
msgstr ""

#: ../intro-ha.rst:149
msgid ""
"For this reason, each cluster in a high availability environment should have "
"an odd number of nodes and the quorum is defined as more than a half of the "
"nodes. If multiple nodes fail so that the cluster size falls below the "
"quorum value, the cluster itself fails."
msgstr ""

#: ../intro-ha.rst:155
msgid ""
"For example, in a seven-node cluster, the quorum should be set to "
"``floor(7/2) + 1 == 4``. If quorum is four and four nodes fail "
"simultaneously, the cluster itself would fail, whereas it would continue to "
"function, if no more than three nodes fail. If split to partitions of three "
"and four nodes respectively, the quorum of four nodes would continue to "
"operate the majority partition and stop or fence the minority one (depending "
"on the no-quorum-policy cluster configuration)."
msgstr ""

#: ../intro-ha.rst:163
msgid ""
"And the quorum could also have been set to three, just as a configuration "
"example."
msgstr ""

#: ../intro-ha.rst:168
msgid ""
"We do not recommend setting the quorum to a value less than ``floor(n/2) + "
"1`` as it would likely cause a split-brain in a face of network partitions."
msgstr ""

#: ../intro-ha.rst:171
msgid ""
"When four nodes fail simultaneously, the cluster would continue to function "
"as well. But if split to partitions of three and four nodes respectively, "
"the quorum of three would have made both sides to attempt to fence the other "
"and host resources. Without fencing enabled, it would go straight to running "
"two copies of each resource."
msgstr ""

#: ../intro-ha.rst:177
msgid ""
"This is why setting the quorum to a value less than ``floor(n/2) + 1`` is "
"dangerous. However it may be required for some specific cases, such as a "
"temporary measure at a point it is known with 100% certainty that the other "
"nodes are down."
msgstr ""

#: ../intro-ha.rst:182
msgid ""
"When configuring an OpenStack environment for study or demonstration "
"purposes, it is possible to turn off the quorum checking. Production systems "
"should always run with quorum enabled."
msgstr ""

#: ../intro-ha.rst:188
msgid "Single-controller high availability mode"
msgstr ""

#: ../intro-ha.rst:190
msgid ""
"OpenStack supports a single-controller high availability mode that is "
"managed by the services that manage highly available environments but is not "
"actually highly available because no redundant controllers are configured to "
"use for failover. This environment can be used for study and demonstration "
"but is not appropriate for a production environment."
msgstr ""

#: ../intro-ha.rst:197
msgid ""
"It is possible to add controllers to such an environment to convert it into "
"a truly highly available environment."
msgstr ""

#: ../intro-ha.rst:200
msgid ""
"High availability is not for every user. It presents some challenges. High "
"availability may be too complex for databases or systems with large amounts "
"of data. Replication can slow large systems down. Different setups have "
"different prerequisites. Read the guidelines for each setup."
msgstr ""

#: ../intro-ha.rst:208
msgid "High availability is turned off as the default in OpenStack setups."
msgstr ""

#: ../networking-ha-dhcp.rst:3
msgid "Run Networking DHCP agent"
msgstr ""

#: ../networking-ha-dhcp.rst:5
msgid ""
"The OpenStack Networking (neutron) service has a scheduler that lets you run "
"multiple agents across nodes. The DHCP agent can be natively highly "
"available."
msgstr ""

#: ../networking-ha-dhcp.rst:8
msgid ""
"To configure the number of DHCP agents per network, modify the "
"``dhcp_agents_per_network`` parameter in the :file:`/etc/neutron/neutron."
"conf` file. By default this is set to 1. To achieve high availability, "
"assign more than one DHCP agent per network. For more information, see `High-"
"availability for DHCP <https://docs.openstack.org/newton/networking-guide/"
"config-dhcp-ha.html>`_."
msgstr ""

#: ../networking-ha-l3.rst:3
msgid "Run Networking L3 agent"
msgstr ""

#: ../networking-ha-l3.rst:5
msgid ""
"The Networking (neutron) service L3 agent is scalable, due to the scheduler "
"that supports Virtual Router Redundancy Protocol (VRRP) to distribute "
"virtual routers across multiple nodes. For more information about the VRRP "
"and keepalived, see `Linux bridge: High availability using VRRP <https://"
"docs.openstack.org/newton/networking-guide/config-dvr-ha-snat.html>`_ and "
"`Open vSwitch: High availability using VRRP <https://docs.openstack.org/"
"newton/networking-guide/deploy-ovs-ha-vrrp.html>`_."
msgstr ""

#: ../networking-ha-l3.rst:13
msgid ""
"To enable high availability for configured routers, edit the :file:`/etc/"
"neutron/neutron.conf` file to set the following values:"
msgstr ""

#: ../networking-ha-l3.rst:17
msgid "/etc/neutron/neutron.conf parameters for high availability"
msgstr ""

#: ../networking-ha-l3.rst:21
msgid "Parameter"
msgstr ""

#: ../networking-ha-l3.rst:22
msgid "Value"
msgstr ""

#: ../networking-ha-l3.rst:23
msgid "Description"
msgstr ""

#: ../networking-ha-l3.rst:24
msgid "l3_ha"
msgstr ""

#: ../networking-ha-l3.rst:25 ../networking-ha-l3.rst:28
msgid "True"
msgstr ""

#: ../networking-ha-l3.rst:26
msgid "All routers are highly available by default."
msgstr ""

#: ../networking-ha-l3.rst:27
msgid "allow_automatic_l3agent_failover"
msgstr ""

#: ../networking-ha-l3.rst:29
msgid "Set automatic L3 agent failover for routers"
msgstr ""

#: ../networking-ha-l3.rst:30
msgid "max_l3_agents_per_router"
msgstr ""

#: ../networking-ha-l3.rst:31 ../networking-ha-l3.rst:34
msgid "2 or more"
msgstr ""

#: ../networking-ha-l3.rst:32
msgid "Maximum number of network nodes to use for the HA router."
msgstr ""

#: ../networking-ha-l3.rst:33
msgid "min_l3_agents_per_router"
msgstr ""

#: ../networking-ha-l3.rst:35
msgid ""
"Minimum number of network nodes to use for the HA router. A new router can "
"be created only if this number of network nodes are available."
msgstr ""

#: ../networking-ha.rst:3
msgid "Configuring the networking services"
msgstr ""

#: ../networking-ha.rst:11
msgid ""
"Configure networking on each node. See the basic information about "
"configuring networking in the *Networking service* section of the `Install "
"Guides <https://docs.openstack.org/ocata/install/>`_, depending on your "
"distribution."
msgstr ""

#: ../networking-ha.rst:17
msgid "OpenStack network nodes contain:"
msgstr ""

#: ../networking-ha.rst:19
msgid ":doc:`Networking DHCP agent<networking-ha-dhcp>`"
msgstr ""

#: ../networking-ha.rst:20
msgid ":doc:`Neutron L3 agent<networking-ha-l3>`"
msgstr ""

#: ../networking-ha.rst:21
msgid "Networking L2 agent"
msgstr ""

#: ../networking-ha.rst:25
msgid ""
"The L2 agent cannot be distributed and highly available. Instead, it must be "
"installed on each data forwarding node to control the virtual network driver "
"such as Open vSwitch or Linux Bridge. One L2 agent runs per node and "
"controls its virtual interfaces."
msgstr ""

#: ../networking-ha.rst:33
msgid ""
"For Liberty, you can not have the standalone network nodes. The Networking "
"services are run on the controller nodes. In this guide, the term `network "
"nodes` is used for convenience."
msgstr ""

#: ../shared-database-configure.rst:3
msgid "Configuration"
msgstr ""

#: ../shared-database-configure.rst:5
msgid ""
"Before you launch Galera Cluster, you need to configure the server and the "
"database to operate as part of the cluster."
msgstr ""

#: ../shared-database-configure.rst:9
msgid "Configuring the server"
msgstr ""

#: ../shared-database-configure.rst:11
msgid ""
"Certain services running on the underlying operating system of your "
"OpenStack database may block Galera Cluster from normal operation or prevent "
"``mysqld`` from achieving network connectivity with the cluster."
msgstr ""

#: ../shared-database-configure.rst:16
msgid "Firewall"
msgstr ""

#: ../shared-database-configure.rst:18
msgid ""
"Galera Cluster requires that you open the following ports to network traffic:"
msgstr ""

#: ../shared-database-configure.rst:20
msgid ""
"On ``3306``, Galera Cluster uses TCP for database client connections and "
"State Snapshot Transfers methods that require the client, (that is, "
"``mysqldump``)."
msgstr ""

#: ../shared-database-configure.rst:23
msgid ""
"On ``4567``, Galera Cluster uses TCP for replication traffic. Multicast "
"replication uses both TCP and UDP on this port."
msgstr ""

#: ../shared-database-configure.rst:25
msgid "On ``4568``, Galera Cluster uses TCP for Incremental State Transfers."
msgstr ""

#: ../shared-database-configure.rst:26
msgid ""
"On ``4444``, Galera Cluster uses TCP for all other State Snapshot Transfer "
"methods."
msgstr ""

#: ../shared-database-configure.rst:31
msgid ""
"For more information on firewalls, see `firewalls and default ports <https://"
"docs.openstack.org/admin-guide/firewalls-default-ports.html>`_ in OpenStack "
"Administrator Guide."
msgstr ""

#: ../shared-database-configure.rst:35
msgid "This can be achieved using the :command:`iptables` command:"
msgstr ""

#: ../shared-database-configure.rst:43
msgid ""
"Make sure to save the changes once you are done. This will vary depending on "
"your distribution:"
msgstr ""

#: ../shared-database-configure.rst:46
msgid ""
"For `Ubuntu <https://askubuntu.com/questions/66890/how-can-i-make-a-specific-"
"set-of-iptables-rules-permanent#66905>`_"
msgstr ""

#: ../shared-database-configure.rst:47
msgid ""
"For `Fedora <https://fedoraproject.org/wiki/How_to_edit_iptables_rules>`_"
msgstr ""

#: ../shared-database-configure.rst:49
msgid ""
"Alternatively, make modifications using the ``firewall-cmd`` utility for "
"FirewallD that is available on many Linux distributions:"
msgstr ""

#: ../shared-database-configure.rst:58
msgid "SELinux"
msgstr ""

#: ../shared-database-configure.rst:60
msgid ""
"Security-Enhanced Linux is a kernel module for improving security on Linux "
"operating systems. It is commonly enabled and configured by default on Red "
"Hat-based distributions. In the context of Galera Cluster, systems with "
"SELinux may block the database service, keep it from starting, or prevent it "
"from establishing network connections with the cluster."
msgstr ""

#: ../shared-database-configure.rst:66
msgid ""
"To configure SELinux to permit Galera Cluster to operate, you may need to "
"use the ``semanage`` utility to open the ports it uses. For example:"
msgstr ""

#: ../shared-database-configure.rst:74
msgid ""
"Older versions of some distributions, which do not have an up-to-date policy "
"for securing Galera, may also require SELinux to be more relaxed about "
"database access and actions:"
msgstr ""

#: ../shared-database-configure.rst:84
msgid ""
"Bear in mind, leaving SELinux in permissive mode is not a good security "
"practice. Over the longer term, you need to develop a security policy for "
"Galera Cluster and then switch SELinux back into enforcing mode."
msgstr ""

#: ../shared-database-configure.rst:89
msgid ""
"For more information on configuring SELinux to work with Galera Cluster, see "
"the `SELinux Documentation <http://galeracluster.com/documentation-webpages/"
"selinux.html>`_"
msgstr ""

#: ../shared-database-configure.rst:94
msgid "AppArmor"
msgstr ""

#: ../shared-database-configure.rst:96
msgid ""
"Application Armor is a kernel module for improving security on Linux "
"operating systems. It is developed by Canonical and commonly used on Ubuntu-"
"based distributions. In the context of Galera Cluster, systems with AppArmor "
"may block the database service from operating normally."
msgstr ""

#: ../shared-database-configure.rst:101
msgid ""
"To configure AppArmor to work with Galera Cluster, complete the following "
"steps on each cluster node:"
msgstr ""

#: ../shared-database-configure.rst:104
msgid ""
"Create a symbolic link for the database server in the ``disable`` directory:"
msgstr ""

#: ../shared-database-configure.rst:110
msgid ""
"Restart AppArmor. For servers that use ``init``, run the following command:"
msgstr ""

#: ../shared-database-configure.rst:116 ../shared-database-manage.rst:40
#: ../shared-database-manage.rst:67
msgid "For servers that use ``systemd``, run the following command:"
msgstr ""

#: ../shared-database-configure.rst:122
msgid "AppArmor now permits Galera Cluster to operate."
msgstr ""

#: ../shared-database-configure.rst:125
msgid "Database configuration"
msgstr ""

#: ../shared-database-configure.rst:127
msgid ""
"MySQL databases, including MariaDB and Percona XtraDB, manage their "
"configurations using a ``my.cnf`` file, which is typically located in the ``/"
"etc`` directory. Configuration options available in these databases are also "
"available in Galera Cluster, with some restrictions and several additions."
msgstr ""

#: ../shared-database-configure.rst:157
msgid "Configuring mysqld"
msgstr ""

#: ../shared-database-configure.rst:159
msgid ""
"While all of the configuration parameters available to the standard MySQL, "
"MariaDB, or Percona XtraDB database servers are available in Galera Cluster, "
"there are some that you must define an outset to avoid conflict or "
"unexpected behavior."
msgstr ""

#: ../shared-database-configure.rst:164
msgid ""
"Ensure that the database server is not bound only to the localhost: "
"``127.0.0.1``. Also, do not bind it to ``0.0.0.0``. Binding to the localhost "
"or ``0.0.0.0`` makes ``mySQL`` bind to all IP addresses on the machine, "
"including the virtual IP address causing ``HAProxy`` not to start. Instead, "
"bind to the management IP address of the controller node to enable access by "
"other nodes through the management network:"
msgstr ""

#: ../shared-database-configure.rst:175
msgid ""
"Ensure that the binary log format is set to use row-level replication, as "
"opposed to statement-level replication:"
msgstr ""

#: ../shared-database-configure.rst:184
msgid "Configuring InnoDB"
msgstr ""

#: ../shared-database-configure.rst:186
msgid ""
"Galera Cluster does not support non-transactional storage engines and "
"requires that you use InnoDB by default. There are some additional "
"parameters that you must define to avoid conflicts."
msgstr ""

#: ../shared-database-configure.rst:190
msgid "Ensure that the default storage engine is set to InnoDB:"
msgstr ""

#: ../shared-database-configure.rst:196
msgid ""
"Ensure that the InnoDB locking mode for generating auto-increment values is "
"set to ``2``, which is the interleaved locking mode:"
msgstr ""

#: ../shared-database-configure.rst:203
msgid ""
"Do not change this value. Other modes may cause ``INSERT`` statements on "
"tables with auto-increment columns to fail as well as unresolved deadlocks "
"that leave the system unresponsive."
msgstr ""

#: ../shared-database-configure.rst:207
msgid ""
"Ensure that the InnoDB log buffer is written to file once per second, rather "
"than on each commit, to improve performance:"
msgstr ""

#: ../shared-database-configure.rst:214
msgid ""
"Setting this parameter to ``0`` or ``2`` can improve performance, but it "
"introduces certain dangers. Operating system failures can erase the last "
"second of transactions. While you can recover this data from another node, "
"if the cluster goes down at the same time (in the event of a data center "
"power outage), you lose this data permanently."
msgstr ""

#: ../shared-database-configure.rst:220
msgid ""
"Define the InnoDB memory buffer pool size. The default value is 128 MB, but "
"to compensate for Galera Cluster's additional memory usage, scale your usual "
"value back by 5%:"
msgstr ""

#: ../shared-database-configure.rst:230
msgid "Configuring wsrep replication"
msgstr ""

#: ../shared-database-configure.rst:232
msgid ""
"Galera Cluster configuration parameters all have the ``wsrep_`` prefix. You "
"must define the following parameters for each cluster node in your OpenStack "
"database."
msgstr ""

#: ../shared-database-configure.rst:236
msgid ""
"**wsrep Provider**: The Galera Replication Plugin serves as the ``wsrep`` "
"provider for Galera Cluster. It is installed on your system as the "
"``libgalera_smm.so`` file. Define the path to this file in your ``my.cnf``:"
msgstr ""

#: ../shared-database-configure.rst:245
msgid "**Cluster Name**: Define an arbitrary name for your cluster."
msgstr ""

#: ../shared-database-configure.rst:251
msgid ""
"You must use the same name on every cluster node. The connection fails when "
"this value does not match."
msgstr ""

#: ../shared-database-configure.rst:254
msgid "**Cluster Address**: List the IP addresses for each cluster node."
msgstr ""

#: ../shared-database-configure.rst:260
msgid ""
"Replace the IP addresses given here with comma-separated list of each "
"OpenStack database in your cluster."
msgstr ""

#: ../shared-database-configure.rst:263
msgid "**Node Name**: Define the logical name of the cluster node."
msgstr ""

#: ../shared-database-configure.rst:269
msgid "**Node Address**: Define the IP address of the cluster node."
msgstr ""

#: ../shared-database-configure.rst:276
msgid "Additional parameters"
msgstr ""

#: ../shared-database-configure.rst:278
msgid ""
"For a complete list of the available parameters, run the ``SHOW VARIABLES`` "
"command from within the database client:"
msgstr ""

#: ../shared-database-configure.rst:299
msgid ""
"For documentation about these parameters, ``wsrep`` provider option, and "
"status variables available in Galera Cluster, see the Galera cluster "
"`Reference <http://galeracluster.com/documentation-webpages/reference."
"html>`_."
msgstr ""

#: ../shared-database-manage.rst:3
msgid "Management"
msgstr ""

#: ../shared-database-manage.rst:5
msgid ""
"When you finish installing and configuring the OpenStack database, you can "
"initialize the Galera Cluster."
msgstr ""

#: ../shared-database-manage.rst:11
msgid "Database hosts with Galera Cluster installed"
msgstr ""

#: ../shared-database-manage.rst:12
msgid "A minimum of three hosts"
msgstr ""

#: ../shared-database-manage.rst:13
msgid "No firewalls between the hosts"
msgstr ""

#: ../shared-database-manage.rst:14
msgid "SELinux and AppArmor set to permit access to ``mysqld``"
msgstr ""

#: ../shared-database-manage.rst:15
msgid ""
"The correct path to ``libgalera_smm.so`` given to the ``wsrep_provider`` "
"parameter"
msgstr ""

#: ../shared-database-manage.rst:19
msgid "Initializing the cluster"
msgstr ""

#: ../shared-database-manage.rst:21
msgid ""
"In the Galera Cluster, the Primary Component is the cluster of database "
"servers that replicate into each other. In the event that a cluster node "
"loses connectivity with the Primary Component, it defaults into a non-"
"operational state, to avoid creating or serving inconsistent data."
msgstr ""

#: ../shared-database-manage.rst:27
msgid ""
"By default, cluster nodes do not start as part of a Primary Component. In "
"the Primary Component, replication and state transfers bring all databases "
"to the same state."
msgstr ""

#: ../shared-database-manage.rst:31
msgid "To start the cluster, complete the following steps:"
msgstr ""

#: ../shared-database-manage.rst:33
msgid ""
"Initialize the Primary Component on one cluster node. For servers that use "
"``init``, run the following command:"
msgstr ""

#: ../shared-database-manage.rst:46
msgid ""
"Once the database server starts, check the cluster status using the "
"``wsrep_cluster_size`` status variable. From the database client, run the "
"following command:"
msgstr ""

#: ../shared-database-manage.rst:60
msgid ""
"Start the database server on all other cluster nodes. For servers that use "
"``init``, run the following command:"
msgstr ""

#: ../shared-database-manage.rst:73
msgid ""
"When you have all cluster nodes started, log into the database client of any "
"cluster node and check the ``wsrep_cluster_size`` status variable again:"
msgstr ""

#: ../shared-database-manage.rst:87
msgid ""
"When each cluster node starts, it checks the IP addresses given to the "
"``wsrep_cluster_address`` parameter. It then attempts to establish network "
"connectivity with a database server running there. Once it establishes a "
"connection, it attempts to join the Primary Component, requesting a state "
"transfer as needed to bring itself into sync with the cluster."
msgstr ""

#: ../shared-database-manage.rst:96
msgid ""
"In the event that you need to restart any cluster node, you can do so. When "
"the database server comes back it, it establishes connectivity with the "
"Primary Component and updates itself to any changes it may have missed while "
"down."
msgstr ""

#: ../shared-database-manage.rst:102
msgid "Restarting the cluster"
msgstr ""

#: ../shared-database-manage.rst:104
msgid ""
"Individual cluster nodes can stop and be restarted without issue. When a "
"database loses its connection or restarts, the Galera Cluster brings it back "
"into sync once it reestablishes connection with the Primary Component. In "
"the event that you need to restart the entire cluster, identify the most "
"advanced cluster node and initialize the Primary Component on that node."
msgstr ""

#: ../shared-database-manage.rst:111
msgid ""
"To find the most advanced cluster node, you need to check the sequence "
"numbers, or the ``seqnos``, on the last committed transaction for each. You "
"can find this by viewing ``grastate.dat`` file in database directory:"
msgstr ""

#: ../shared-database-manage.rst:125
msgid ""
"Alternatively, if the database server is running, use the "
"``wsrep_last_committed`` status variable:"
msgstr ""

#: ../shared-database-manage.rst:138
msgid ""
"This value increments with each transaction, so the most advanced node has "
"the highest sequence number and therefore is the most up to date."
msgstr ""

#: ../shared-database-manage.rst:142
msgid "Configuration tips"
msgstr ""

#: ../shared-database-manage.rst:145
msgid "Deployment strategies"
msgstr ""

#: ../shared-database-manage.rst:147
msgid "Galera can be configured using one of the following strategies:"
msgstr ""

#: ../shared-database-manage.rst:150
msgid "Each instance has its own IP address:"
msgstr ""

#: ../shared-database-manage.rst:152
msgid ""
"OpenStack services are configured with the list of these IP addresses so "
"they can select one of the addresses from those available."
msgstr ""

#: ../shared-database-manage.rst:156
msgid "Galera runs behind HAProxy:"
msgstr ""

#: ../shared-database-manage.rst:158
msgid ""
"HAProxy load balances incoming requests and exposes just one IP address for "
"all the clients."
msgstr ""

#: ../shared-database-manage.rst:161
msgid ""
"Galera synchronous replication guarantees a zero slave lag. The failover "
"procedure completes once HAProxy detects that the active back end has gone "
"down and switches to the backup one, which is then marked as ``UP``. If no "
"back ends are ``UP``, the failover procedure finishes only when the Galera "
"Cluster has been successfully reassembled. The SLA is normally no more than "
"5 minutes."
msgstr ""

#: ../shared-database-manage.rst:169
msgid ""
"Use MySQL/Galera in active/passive mode to avoid deadlocks on ``SELECT ... "
"FOR UPDATE`` type queries (used, for example, by nova and neutron). This "
"issue is discussed in the following:"
msgstr ""

#: ../shared-database-manage.rst:173
msgid ""
"`IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE <http://"
"lists.openstack.org/pipermail/openstack-dev/2014-May/035264.html>`_"
msgstr ""

#: ../shared-database-manage.rst:175
msgid ""
"`Understanding reservations, concurrency, and locking in Nova <http://www."
"joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/>`_"
msgstr ""

#: ../shared-database-manage.rst:181
msgid ""
"If you use HAProxy as a load-balancing client to provide access to the "
"Galera Cluster, as described in the :doc:`controller-ha-haproxy`, you can "
"use the ``clustercheck`` utility to improve health checks."
msgstr ""

#: ../shared-database-manage.rst:185
msgid ""
"Create a configuration file for ``clustercheck`` at ``/etc/sysconfig/"
"clustercheck``:"
msgstr ""

#: ../shared-database-manage.rst:196
msgid ""
"For Ubuntu 16.04.1: Create a configuration file for ``clustercheck`` at ``/"
"etc/default/clustercheck``."
msgstr ""

#: ../shared-database-manage.rst:199
msgid ""
"Log in to the database client and grant the ``clustercheck`` user "
"``PROCESS`` privileges:"
msgstr ""

#: ../shared-database-manage.rst:209
msgid ""
"You only need to do this on one cluster node. Galera Cluster replicates the "
"user to all the others."
msgstr ""

#: ../shared-database-manage.rst:212
msgid ""
"Create a configuration file for the HAProxy monitor service, at ``/etc/"
"xinetd.d/galera-monitor``:"
msgstr ""

#: ../shared-database-manage.rst:235
msgid ""
"Start the ``xinetd`` daemon for ``clustercheck``. For servers that use "
"``init``, run the following commands:"
msgstr ""

#: ../shared-database-manage.rst:243
msgid "For servers that use ``systemd``, run the following commands:"
msgstr ""

#: ../shared-database.rst:3
msgid "Database (Galera Cluster) for high availability"
msgstr ""

#: ../shared-database.rst:11
msgid ""
"The first step is to install the database that sits at the heart of the "
"cluster. To implement high availability, run an instance of the database on "
"each controller node and use Galera Cluster to provide replication between "
"them. Galera Cluster is a synchronous multi-master database cluster, based "
"on MySQL and the InnoDB storage engine. It is a high-availability service "
"that provides high system uptime, no data loss, and scalability for growth."
msgstr ""

#: ../shared-database.rst:18
msgid ""
"You can achieve high availability for the OpenStack database in many "
"different ways, depending on the type of database that you want to use. "
"There are three implementations of Galera Cluster available to you:"
msgstr ""

#: ../shared-database.rst:22
msgid ""
"`Galera Cluster for MySQL <http://galeracluster.com>`_: The MySQL reference "
"implementation from Codership, Oy."
msgstr ""

#: ../shared-database.rst:24
msgid ""
"`MariaDB Galera Cluster <https://mariadb.org>`_: The MariaDB implementation "
"of Galera Cluster, which is commonly supported in environments based on Red "
"Hat distributions."
msgstr ""

#: ../shared-database.rst:27
msgid ""
"`Percona XtraDB Cluster <https://www.percona.com>`_: The XtraDB "
"implementation of Galera Cluster from Percona."
msgstr ""

#: ../shared-database.rst:30
msgid ""
"In addition to Galera Cluster, you can also achieve high availability "
"through other database options, such as PostgreSQL, which has its own "
"replication system."
msgstr ""

#: ../shared-messaging.rst:3
msgid "Messaging service for high availability"
msgstr ""

#: ../shared-messaging.rst:5
msgid ""
"An AMQP (Advanced Message Queuing Protocol) compliant message bus is "
"required for most OpenStack components in order to coordinate the execution "
"of jobs entered into the system."
msgstr ""

#: ../shared-messaging.rst:9
msgid ""
"The most popular AMQP implementation used in OpenStack installations is "
"RabbitMQ."
msgstr ""

#: ../shared-messaging.rst:12
msgid ""
"RabbitMQ nodes fail over on the application and the infrastructure layers."
msgstr ""

#: ../shared-messaging.rst:14
msgid ""
"The application layer is controlled by the ``oslo.messaging`` configuration "
"options for multiple AMQP hosts. If the AMQP node fails, the application "
"reconnects to the next one configured within the specified reconnect "
"interval. The specified reconnect interval constitutes its SLA."
msgstr ""

#: ../shared-messaging.rst:20
msgid ""
"On the infrastructure layer, the SLA is the time for which RabbitMQ cluster "
"reassembles. Several cases are possible. The Mnesia keeper node is the "
"master of the corresponding Pacemaker resource for RabbitMQ. When it fails, "
"the result is a full AMQP cluster downtime interval. Normally, its SLA is no "
"more than several minutes. Failure of another node that is a slave of the "
"corresponding Pacemaker resource for RabbitMQ results in no AMQP cluster "
"downtime at all."
msgstr ""

#: ../shared-messaging.rst:28
msgid ""
"Making the RabbitMQ service highly available involves the following steps:"
msgstr ""

#: ../shared-messaging.rst:30
msgid ":ref:`Install RabbitMQ<rabbitmq-install>`"
msgstr ""

#: ../shared-messaging.rst:32
msgid ":ref:`Configure RabbitMQ for HA queues<rabbitmq-configure>`"
msgstr ""

#: ../shared-messaging.rst:34
msgid ""
":ref:`Configure OpenStack services to use RabbitMQ HA queues <rabbitmq-"
"services>`"
msgstr ""

#: ../shared-messaging.rst:39
msgid ""
"Access to RabbitMQ is not normally handled by HAProxy. Instead, consumers "
"must be supplied with the full list of hosts running RabbitMQ with "
"``rabbit_hosts`` and turn on the ``rabbit_ha_queues`` option. For more "
"information, read the `core issue <http://people.redhat.com/jeckersb/private/"
"vip-failover-tcp-persist.html>`_. For more detail, read the `history and "
"solution <http://greenstack.die.upm.es/2015/03/02/improving-ha-failures-with-"
"tcp-timeouts/>`_."
msgstr ""

#: ../shared-messaging.rst:50
msgid "Install RabbitMQ"
msgstr ""

#: ../shared-messaging.rst:52
msgid ""
"The commands for installing RabbitMQ are specific to the Linux distribution "
"you are using."
msgstr ""

#: ../shared-messaging.rst:55
msgid "For Ubuntu or Debian:"
msgstr ""

#: ../shared-messaging.rst:61
msgid "For RHEL, Fedora, or CentOS:"
msgstr ""

#: ../shared-messaging.rst:67
msgid "For openSUSE:"
msgstr ""

#: ../shared-messaging.rst:73
msgid "For SLES 12:"
msgstr ""

#: ../shared-messaging.rst:83
msgid ""
"For SLES 12, the packages are signed by GPG key 893A90DAD85F9316. You should "
"verify the fingerprint of the imported GPG key before using it."
msgstr ""

#: ../shared-messaging.rst:94
msgid ""
"For more information, see the official installation manual for the "
"distribution:"
msgstr ""

#: ../shared-messaging.rst:97
msgid "`Debian and Ubuntu <https://www.rabbitmq.com/install-debian.html>`_"
msgstr ""

#: ../shared-messaging.rst:98
msgid ""
"`RPM based <https://www.rabbitmq.com/install-rpm.html>`_ (RHEL, Fedora, "
"CentOS, openSUSE)"
msgstr ""

#: ../shared-messaging.rst:104
msgid "Configure RabbitMQ for HA queues"
msgstr ""

#: ../shared-messaging.rst:115
msgid "The following components/services can work with HA queues:"
msgstr ""

#: ../shared-messaging.rst:117
msgid "OpenStack Compute"
msgstr ""

#: ../shared-messaging.rst:118
msgid "OpenStack Block Storage"
msgstr ""

#: ../shared-messaging.rst:119
msgid "OpenStack Networking"
msgstr ""

#: ../shared-messaging.rst:120
msgid "Telemetry"
msgstr ""

#: ../shared-messaging.rst:122
msgid ""
"Consider that, while exchanges and bindings survive the loss of individual "
"nodes, queues and their messages do not because a queue and its contents are "
"located on one node. If we lose this node, we also lose the queue."
msgstr ""

#: ../shared-messaging.rst:126
msgid ""
"Mirrored queues in RabbitMQ improve the availability of service since it is "
"resilient to failures."
msgstr ""

#: ../shared-messaging.rst:129
msgid ""
"Production servers should run (at least) three RabbitMQ servers for testing "
"and demonstration purposes, however it is possible to run only two servers. "
"In this section, we configure two nodes, called ``rabbit1`` and ``rabbit2``. "
"To build a broker, ensure that all nodes have the same Erlang cookie file."
msgstr ""

#: ../shared-messaging.rst:136
msgid ""
"Stop RabbitMQ and copy the cookie from the first node to each of the other "
"node(s):"
msgstr ""

#: ../shared-messaging.rst:143
msgid ""
"On each target node, verify the correct owner, group, and permissions of the "
"file :file:`erlang.cookie`:"
msgstr ""

#: ../shared-messaging.rst:151
msgid ""
"Start the message queue service on all nodes and configure it to start when "
"the system boots. On Ubuntu, it is configured by default."
msgstr ""

#: ../shared-messaging.rst:154
msgid "On CentOS, RHEL, openSUSE, and SLES:"
msgstr ""

#: ../shared-messaging.rst:161
msgid "Verify that the nodes are running:"
msgstr ""

#: ../shared-messaging.rst:172
msgid "Run the following commands on each node except the first one:"
msgstr ""

#: ../shared-messaging.rst:186
msgid ""
"The default node type is a disc node. In this guide, nodes join the cluster "
"as disc nodes. Also, nodes can join the cluster as RAM nodes. For more "
"details about this feature, check `Clusters with RAM nodes <https://www."
"rabbitmq.com/clustering.html#ram-nodes>`_."
msgstr ""

#: ../shared-messaging.rst:191
msgid "Verify the cluster status:"
msgstr ""

#: ../shared-messaging.rst:200
msgid ""
"If the cluster is working, you can create usernames and passwords for the "
"queues."
msgstr ""

#: ../shared-messaging.rst:203
msgid ""
"To ensure that all queues except those with auto-generated names are "
"mirrored across all running nodes, set the ``ha-mode`` policy key to all by "
"running the following command on one of the nodes:"
msgstr ""

#: ../shared-messaging.rst:212
msgid "More information is available in the RabbitMQ documentation:"
msgstr ""

#: ../shared-messaging.rst:214
msgid "`Highly Available Queues <https://www.rabbitmq.com/ha.html>`_"
msgstr ""

#: ../shared-messaging.rst:215
msgid "`Clustering Guide <https://www.rabbitmq.com/clustering.html>`_"
msgstr ""

#: ../shared-messaging.rst:219
msgid ""
"As another option to make RabbitMQ highly available, RabbitMQ contains the "
"OCF scripts for the Pacemaker cluster resource agents since version 3.5.7. "
"It provides the active/active RabbitMQ cluster with mirrored queues. For "
"more information, see `Auto-configuration of a cluster with a Pacemaker "
"<https://www.rabbitmq.com/pacemaker.html>`_."
msgstr ""

#: ../shared-messaging.rst:228
msgid "Configure OpenStack services to use Rabbit HA queues"
msgstr ""

#: ../shared-messaging.rst:230
msgid "Configure the OpenStack components to use at least two RabbitMQ nodes."
msgstr ""

#: ../shared-messaging.rst:232
msgid "Use these steps to configurate all services using RabbitMQ:"
msgstr ""

#: ../shared-messaging.rst:234
msgid ""
"RabbitMQ HA cluster Transport URL using ``[user:pass@]host:port`` format:"
msgstr ""

#: ../shared-messaging.rst:241
msgid ""
"Replace ``RABBIT_USER`` with RabbitMQ username and ``RABBIT_PASS`` with "
"password for respective RabbitMQ host. For more information, see `oslo "
"messaging transport <https://docs.openstack.org/oslo.messaging/latest/"
"reference/transport.html>`_."
msgstr ""

#: ../shared-messaging.rst:246
msgid "Retry connecting with RabbitMQ:"
msgstr ""

#: ../shared-messaging.rst:252
msgid "How long to back-off for between retries when connecting to RabbitMQ:"
msgstr ""

#: ../shared-messaging.rst:258
msgid ""
"Maximum retries with trying to connect to RabbitMQ (infinite by default):"
msgstr ""

#: ../shared-messaging.rst:264
msgid "Use durable queues in RabbitMQ:"
msgstr ""

#: ../shared-messaging.rst:270
msgid "Use HA queues in RabbitMQ (``x-ha-policy: all``):"
msgstr ""

#: ../shared-messaging.rst:278
msgid ""
"If you change the configuration from an old set-up that did not use HA "
"queues, restart the service:"
msgstr ""

#: ../shared-services.rst:3
msgid "Configuring the shared services"
msgstr ""

#: ../shared-services.rst:5
msgid ""
"This chapter describes the shared services for high availability, such as "
"database, messaging service."
msgstr ""

#: ../storage-ha-backend.rst:6
msgid "Storage back end"
msgstr ""

#: ../storage-ha-backend.rst:8
msgid "An OpenStack environment includes multiple data pools for the VMs:"
msgstr ""

#: ../storage-ha-backend.rst:10
msgid ""
"Ephemeral storage is allocated for an instance and is deleted when the "
"instance is deleted. The Compute service manages ephemeral storage and by "
"default, Compute stores ephemeral drives as files on local disks on the "
"compute node. As an alternative, you can use Ceph RBD as the storage back "
"end for ephemeral storage."
msgstr ""

#: ../storage-ha-backend.rst:16
msgid ""
"Persistent storage exists outside all instances. Two types of persistent "
"storage are provided:"
msgstr ""

#: ../storage-ha-backend.rst:19
msgid ""
"The Block Storage service (cinder) that can use LVM or Ceph RBD as the "
"storage back end."
msgstr ""

#: ../storage-ha-backend.rst:21
msgid ""
"The Image service (glance) that can use the Object Storage service (swift) "
"or Ceph RBD as the storage back end."
msgstr ""

#: ../storage-ha-backend.rst:24
msgid ""
"For more information about configuring storage back ends for the different "
"storage options, see `Manage volumes <https://docs.openstack.org/admin-guide/"
"blockstorage-manage-volumes.html>`_ in the OpenStack Administrator Guide."
msgstr ""

#: ../storage-ha-backend.rst:29
msgid ""
"This section discusses ways to protect against data loss in your OpenStack "
"environment."
msgstr ""

#: ../storage-ha-backend.rst:33
msgid "RAID drives"
msgstr ""

#: ../storage-ha-backend.rst:35
msgid ""
"Configuring RAID on the hard drives that implement storage protects your "
"data against a hard drive failure. If the node itself fails, data may be "
"lost. In particular, all volumes stored on an LVM node can be lost."
msgstr ""

#: ../storage-ha-backend.rst:40
msgid "Ceph"
msgstr ""

#: ../storage-ha-backend.rst:42
msgid ""
"`Ceph RBD <https://ceph.com/>`_ is an innately high availability storage "
"back end. It creates a storage cluster with multiple nodes that communicate "
"with each other to replicate and redistribute data dynamically. A Ceph RBD "
"storage cluster provides a single shared set of storage nodes that can "
"handle all classes of persistent and ephemeral data (glance, cinder, and "
"nova) that are required for OpenStack instances."
msgstr ""

#: ../storage-ha-backend.rst:49
msgid ""
"Ceph RBD provides object replication capabilities by storing Block Storage "
"volumes as Ceph RBD objects. Ceph RBD ensures that each replica of an object "
"is stored on a different node. This means that your volumes are protected "
"against hard drive and node failures, or even the failure of the data center "
"itself."
msgstr ""

#: ../storage-ha-backend.rst:55
msgid ""
"When Ceph RBD is used for ephemeral volumes as well as block and image "
"storage, it supports `live migration <https://docs.openstack.org/admin-guide/"
"compute-live-migration-usage.html>`_ of VMs with ephemeral drives. LVM only "
"supports live migration of volume-backed VMs."
msgstr ""

#: ../storage-ha-block.rst:3
msgid "Highly available Block Storage API"
msgstr ""

#: ../storage-ha-block.rst:5
msgid ""
"Cinder provides Block-Storage-as-a-Service suitable for performance "
"sensitive scenarios such as databases, expandable file systems, or providing "
"a server with access to raw block level storage."
msgstr ""

#: ../storage-ha-block.rst:9
msgid ""
"Persistent block storage can survive instance termination and can also be "
"moved across instances like any external storage device. Cinder also has "
"volume snapshots capability for backing up the volumes."
msgstr ""

#: ../storage-ha-block.rst:13
msgid ""
"Making the Block Storage API service highly available in active/passive mode "
"involves:"
msgstr ""

#: ../storage-ha-block.rst:16
msgid ":ref:`ha-blockstorage-pacemaker`"
msgstr ""

#: ../storage-ha-block.rst:17
msgid ":ref:`ha-blockstorage-configure`"
msgstr ""

#: ../storage-ha-block.rst:18
msgid ":ref:`ha-blockstorage-services`"
msgstr ""

#: ../storage-ha-block.rst:20
msgid ""
"In theory, you can run the Block Storage service as active/active. However, "
"because of sufficient concerns, we recommend running the volume component as "
"active/passive only."
msgstr ""

#: ../storage-ha-block.rst:24
msgid ""
"You can read more about these concerns on the `Red Hat Bugzilla <https://"
"bugzilla.redhat.com/show_bug.cgi?id=1193229>`_ and there is a `psuedo "
"roadmap <https://etherpad.openstack.org/p/cinder-kilo-stabilisation-work>`_ "
"for addressing them upstream."
msgstr ""

#: ../storage-ha-block.rst:33
msgid "Add Block Storage API resource to Pacemaker"
msgstr ""

#: ../storage-ha-block.rst:35
msgid ""
"On RHEL-based systems, create resources for cinder's systemd agents and "
"create constraints to enforce startup/shutdown ordering:"
msgstr ""

#: ../storage-ha-block.rst:50
msgid ""
"If the Block Storage service runs on the same nodes as the other services, "
"then it is advisable to also include:"
msgstr ""

#: ../storage-ha-block.rst:57
msgid ""
"Alternatively, instead of using systemd agents, download and install the OCF "
"resource agent:"
msgstr ""

#: ../storage-ha-block.rst:66
msgid ""
"You can now add the Pacemaker configuration for Block Storage API resource. "
"Connect to the Pacemaker cluster with the :command:`crm configure` command "
"and add the following cluster resources:"
msgstr ""

#: ../storage-ha-block.rst:80
msgid ""
"This configuration creates ``p_cinder-api``, a resource for managing the "
"Block Storage API service."
msgstr ""

#: ../storage-ha-block.rst:83
msgid ""
"The command :command:`crm configure` supports batch input, copy and paste "
"the lines above into your live Pacemaker configuration and then make changes "
"as required. For example, you may enter ``edit p_ip_cinder-api`` from the :"
"command:`crm configure` menu and edit the resource to match your preferred "
"virtual IP address."
msgstr ""

#: ../storage-ha-block.rst:89
msgid ""
"Once completed, commit your configuration changes by entering :command:"
"`commit` from the :command:`crm configure` menu. Pacemaker then starts the "
"Block Storage API service and its dependent resources on one of your nodes."
msgstr ""

#: ../storage-ha-block.rst:96
msgid "Configure Block Storage API service"
msgstr ""

#: ../storage-ha-block.rst:98
msgid ""
"Edit the ``/etc/cinder/cinder.conf`` file. For example, on a RHEL-based "
"system:"
msgstr ""

#: ../storage-ha-block.rst:139
msgid ""
"Replace ``CINDER_DBPASS`` with the password you chose for the Block Storage "
"database. Replace ``CINDER_PASS`` with the password you chose for the "
"``cinder`` user in the Identity service."
msgstr ""

#: ../storage-ha-block.rst:143
msgid ""
"This example assumes that you are using NFS for the physical storage, which "
"will almost never be true in a production installation."
msgstr ""

#: ../storage-ha-block.rst:146
msgid ""
"If you are using the Block Storage service OCF agent, some settings will be "
"filled in for you, resulting in a shorter configuration file:"
msgstr ""

#: ../storage-ha-block.rst:167
msgid ""
"Replace ``CINDER_DBPASS`` with the password you chose for the Block Storage "
"database."
msgstr ""

#: ../storage-ha-block.rst:173
msgid ""
"Configure OpenStack services to use the highly available Block Storage API"
msgstr ""

#: ../storage-ha-block.rst:175
msgid ""
"Your OpenStack services must now point their Block Storage API configuration "
"to the highly available, virtual cluster IP address rather than a Block "
"Storage API server’s physical IP address as you would for a non-HA "
"environment."
msgstr ""

#: ../storage-ha-block.rst:179
msgid "Create the Block Storage API endpoint with this IP."
msgstr ""

#: ../storage-ha-block.rst:181
msgid ""
"If you are using both private and public IP addresses, create two virtual "
"IPs and define your endpoint. For example:"
msgstr ""

#: ../storage-ha-file-systems.rst:3
msgid "Highly available Shared File Systems API"
msgstr ""

#: ../storage-ha-file-systems.rst:5
msgid ""
"Making the Shared File Systems (manila) API service highly available in "
"active/passive mode involves:"
msgstr ""

#: ../storage-ha-file-systems.rst:8
msgid ":ref:`ha-sharedfilesystems-pacemaker`"
msgstr ""

#: ../storage-ha-file-systems.rst:9
msgid ":ref:`ha-sharedfilesystems-configure`"
msgstr ""

#: ../storage-ha-file-systems.rst:10
msgid ":ref:`ha-sharedfilesystems-services`"
msgstr ""

#: ../storage-ha-file-systems.rst:15
msgid "Add Shared File Systems API resource to Pacemaker"
msgstr ""

#: ../storage-ha-file-systems.rst:17 ../storage-ha-image.rst:27
msgid "Download the resource agent to your system:"
msgstr ""

#: ../storage-ha-file-systems.rst:25
msgid ""
"Add the Pacemaker configuration for the Shared File Systems API resource. "
"Connect to the Pacemaker cluster with the following command:"
msgstr ""

#: ../storage-ha-file-systems.rst:35
msgid ""
"The :command:`crm configure` supports batch input. Copy and paste the lines "
"in the next step into your live Pacemaker configuration and then make "
"changes as required."
msgstr ""

#: ../storage-ha-file-systems.rst:39
msgid ""
"For example, you may enter ``edit p_ip_manila-api`` from the :command:`crm "
"configure` menu and edit the resource to match your preferred virtual IP "
"address."
msgstr ""

#: ../storage-ha-file-systems.rst:55
msgid ""
"This configuration creates ``p_manila-api``, a resource for managing the "
"Shared File Systems API service."
msgstr ""

#: ../storage-ha-file-systems.rst:58 ../storage-ha-image.rst:66
msgid ""
"Commit your configuration changes by entering the following command from "
"the :command:`crm configure` menu:"
msgstr ""

#: ../storage-ha-file-systems.rst:65
msgid ""
"Pacemaker now starts the Shared File Systems API service and its dependent "
"resources on one of your nodes."
msgstr ""

#: ../storage-ha-file-systems.rst:71
msgid "Configure Shared File Systems API service"
msgstr ""

#: ../storage-ha-file-systems.rst:73
msgid "Edit the :file:`/etc/manila/manila.conf` file:"
msgstr ""

#: ../storage-ha-file-systems.rst:92
msgid "Configure OpenStack services to use HA Shared File Systems API"
msgstr ""

#: ../storage-ha-file-systems.rst:94
msgid ""
"Your OpenStack services must now point their Shared File Systems API "
"configuration to the highly available, virtual cluster IP address rather "
"than a Shared File Systems API server’s physical IP address as you would for "
"a non-HA environment."
msgstr ""

#: ../storage-ha-file-systems.rst:99
msgid "You must create the Shared File Systems API endpoint with this IP."
msgstr ""

#: ../storage-ha-file-systems.rst:101
msgid ""
"If you are using both private and public IP addresses, you should create two "
"virtual IPs and define your endpoints like this:"
msgstr ""

#: ../storage-ha-image.rst:3
msgid "Highly available Image API"
msgstr ""

#: ../storage-ha-image.rst:5
msgid ""
"The OpenStack Image service offers a service for discovering, registering, "
"and retrieving virtual machine images. To make the OpenStack Image API "
"service highly available in active/passive mode, you must:"
msgstr ""

#: ../storage-ha-image.rst:9
msgid ":ref:`glance-api-pacemaker`"
msgstr ""

#: ../storage-ha-image.rst:10
msgid ":ref:`glance-api-configure`"
msgstr ""

#: ../storage-ha-image.rst:11
msgid ":ref:`glance-services`"
msgstr ""

#: ../storage-ha-image.rst:16
msgid ""
"Before beginning, ensure that you are familiar with the documentation for "
"installing the OpenStack Image API service. See the *Image service* section "
"in the `Installation Guides <https://docs.openstack.org/ocata/install/>`_, "
"depending on your distribution."
msgstr ""

#: ../storage-ha-image.rst:25
msgid "Add OpenStack Image API resource to Pacemaker"
msgstr ""

#: ../storage-ha-image.rst:35
msgid ""
"Add the Pacemaker configuration for the OpenStack Image API resource. Use "
"the following command to connect to the Pacemaker cluster:"
msgstr ""

#: ../storage-ha-image.rst:44
msgid ""
"The :command:`crm configure` command supports batch input. Copy and paste "
"the lines in the next step into your live Pacemaker configuration and then "
"make changes as required."
msgstr ""

#: ../storage-ha-image.rst:48
msgid ""
"For example, you may enter ``edit p_ip_glance-api`` from the :command:`crm "
"configure` menu and edit the resource to match your preferred virtual IP "
"address."
msgstr ""

#: ../storage-ha-image.rst:63
msgid ""
"This configuration creates ``p_glance-api``, a resource for managing the "
"OpenStack Image API service."
msgstr ""

#: ../storage-ha-image.rst:73
msgid ""
"Pacemaker then starts the OpenStack Image API service and its dependent "
"resources on one of your nodes."
msgstr ""

#: ../storage-ha-image.rst:79
msgid "Configure OpenStack Image service API"
msgstr ""

#: ../storage-ha-image.rst:81
msgid ""
"Edit the :file:`/etc/glance/glance-api.conf` file to configure the OpenStack "
"Image service:"
msgstr ""

#: ../storage-ha-image.rst:104
msgid "[TODO: need more discussion of these parameters]"
msgstr ""

#: ../storage-ha-image.rst:109
msgid ""
"Configure OpenStack services to use the highly available OpenStack Image API"
msgstr ""

#: ../storage-ha-image.rst:111
msgid ""
"Your OpenStack services must now point their OpenStack Image API "
"configuration to the highly available, virtual cluster IP address instead of "
"pointing to the physical IP address of an OpenStack Image API server as you "
"would in a non-HA cluster."
msgstr ""

#: ../storage-ha-image.rst:116
msgid ""
"For example, if your OpenStack Image API service IP address is 10.0.0.11 (as "
"in the configuration explained here), you would use the following "
"configuration in your :file:`nova.conf` file:"
msgstr ""

#: ../storage-ha-image.rst:128
msgid ""
"You must also create the OpenStack Image API endpoint with this IP address. "
"If you are using both private and public IP addresses, create two virtual IP "
"addresses and define your endpoint. For example:"
msgstr ""

#: ../storage-ha.rst:3
msgid "Configuring storage"
msgstr ""

#: ../storage-ha.rst:13
msgid ""
"Making the Block Storage (cinder) API service highly available in active/"
"active mode involves:"
msgstr ""

#: ../storage-ha.rst:16
msgid "Configuring Block Storage to listen on the VIP address"
msgstr ""

#: ../storage-ha.rst:18
msgid ""
"Managing the Block Storage API daemon with the Pacemaker cluster manager"
msgstr ""

#: ../storage-ha.rst:20
msgid "Configuring OpenStack services to use this IP address"
msgstr ""
