Configs updated 2017/2/28 to reflect what I'm using now.
As part of my continuing investigation into QuintupleO, I've been playing around with multi-node devstack to see how QuintupleO behaves across multiple compute hosts. The good news is that the answer is "quite well" (another blog post on that topic is probably needed as well). Since some of the devstack multi-host documentation I found through Google was a bit out of date, I thought I'd go ahead and post what I've been using for my devstack setup.
This paragraph should be obsolete, but I'm leaving it for historical reference. First, I'm using Fedora 21 Server installs on a couple of 1U servers. This is noteworthy because Fedora 21 has a couple of iptables quirks that affect devstack. The first is that on my standard installs (I didn't change any package selections at install time) the iptables-services package was not present, which caused devstack to fail outright. I've posted a patch to fix that, so by the time you read this it may no longer be an issue, but it's something to keep in mind. The workaround is to run
sudo yum install -y iptables-services
before running stack.sh. The other quirk is that the default Fedora iptables configuration breaks multi-node because it won't allow things like rabbitmq traffic from the compute node. Because I'm in an isolated development environment, I just ran
sudo systemctl stop iptables
and that took care of the problem for me.
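If you don't want to turn the firewall off entirely, a more surgical option (untested on my end, so treat it as a sketch) would be to punch holes for just the traffic the compute node needs. RabbitMQ listens on 5672 and MySQL on 3306 by default, and 11.1.1.89 is my compute node's IP in the example below:
# Allow rabbitmq and mysql traffic from the compute node instead of stopping iptables
sudo iptables -I INPUT -p tcp --dport 5672 -s 11.1.1.89 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 3306 -s 11.1.1.89 -j ACCEPT
Any other service the compute node talks to (glance on 9292, for example) would need a similar rule.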
It's also a good idea to set SELinux to permissive in /etc/selinux/config. Devstack runs
setenforce 0
for you when you run stack.sh, but that change isn't persistent, so it will be lost if you rejoin-stack after a reboot.
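For reference, here's a quick way to make that change (this assumes the stock SELINUX=enforcing line in the Fedora config file):
# Make SELinux permissive persistently, then apply it to the running system
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
sudo setenforce 0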
I also found that in order to ssh to guests on the compute node, I had to set the MTU on the bridged ethernet interface to something higher than the default 1500. I went with 9000 since all of my hardware supports jumbo frames. This can be done with ifconfig [interface] mtu 9000
(Edit: This command was originally missing the interface part because I used something that looked like an HTML tag as the placeholder). I believe this has something to do with tunneling overhead, but that's the extent of my knowledge on the subject. ;-)
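Note that the ifconfig change won't survive a reboot either. On Fedora I believe you can make it persistent by adding an MTU line to the interface's ifcfg file; em1 below is just a placeholder for your actual interface name:
# /etc/sysconfig/network-scripts/ifcfg-em1 (interface name is a placeholder)
MTU=9000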
Finally, an appropriate localrc is needed on both the controller and compute nodes. You can find the ones I'm using below (note that I'm only enabling things I need for QuintupleO).
Update 2017/2/28: Because of the nova cells work, it is now necessary to run devstack/tools/discover_hosts.sh after deploying the compute node.
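In practice that means running something like the following from the devstack checkout on the controller (the controller is where the cell database lives) once stack.sh has finished on the compute node:
# Run on the controller after the compute node has stacked
cd devstack
./tools/discover_hosts.sh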
Controller node localrc:

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=token
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
HOST_IP=11.1.1.78
MULTI_HOST=True
IP_VERSION=4
# I don't want config drive for what I'm doing. Feel free to omit this one.
FORCE_CONFIG_DRIVE=False
disable_service cinder c-api c-vol c-sch c-bak
enable_plugin heat git://git.openstack.org/openstack/heat
enable_service h-eng h-api h-api-cfn h-api-cw
disable_service n-net
enable_service quantum q-svc q-agt q-dhcp q-l3 q-meta q-lbaas q-vpn q-fwaas q-metering
disable_service tempest
Compute node localrc:

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=token
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
ENABLED_SERVICES=n-cpu,placement-client,q-agt
MULTI_HOST=True
# Should be the IP of the compute node, and will be different for each compute node
HOST_IP=11.1.1.89
# These should all be the IP of the controller node
SERVICE_HOST=11.1.1.78
MYSQL_HOST=11.1.1.78
RABBIT_HOST=11.1.1.78
GLANCE_HOSTPORT=11.1.1.78:9292
# Needed these settings for the Horizon VNC console to work on compute node guests
NOVA_VNC_ENABLED=True
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
# I don't want config drive for what I'm doing. Feel free to omit this one.
FORCE_CONFIG_DRIVE=False
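Once both nodes are stacked (and discover_hosts.sh has been run), one quick sanity check is to list the hypervisors from the controller and confirm the compute node registered. This assumes you source devstack's admin credentials first:
# From the devstack directory on the controller
source openrc admin admin
openstack hypervisor list
Both hosts should show up in the output.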