The project contains a Vagrant config file and Puppet manifests that, together with an appropriate base box, will create a VM set up to build RPMs and host them in a custom yum repository.
We have a site with a hardware SSL accelerator which routes HTTP traffic to port 80 and decrypted HTTPS traffic (so back to plain HTTP) to port 443. We wanted Varnish to cache the port 443 traffic, and I came up with the proof-of-concept config below. In reality you’d want a set of rules tailored to your HTTPS site to ensure you cache only what you want to.
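A minimal sketch of that kind of config, assuming Varnish 3 syntax and that varnishd listens on both ports (e.g. started with `-a :80,:443`); the backend address is hypothetical:

```
backend default {
  # Hypothetical application server behind Varnish.
  .host = "127.0.0.1";
  .port = "8080";
}

sub vcl_recv {
  if (server.port == 443) {
    # Decrypted HTTPS from the accelerator: mark it so the backend knows.
    set req.http.X-Forwarded-Proto = "https";
  } else {
    set req.http.X-Forwarded-Proto = "http";
  }
  # A real config would whitelist which HTTPS URLs are safe to cache here.
}

sub vcl_hash {
  # Cache HTTP and HTTPS variants of the same URL as separate objects.
  hash_data(req.url);
  hash_data(req.http.host);
  hash_data(req.http.X-Forwarded-Proto);
  return (hash);
}
```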
If you want to create custom RPMs and install them with the usual automated dependency management, you’ll need your own yum repository. This is just the RPMs plus metadata, in the form of static XML files, served by a webserver.
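As a sketch of what that looks like in practice (the directory layout, hostname, and repo id are my assumptions), you run createrepo over a directory of RPMs and serve the result:

```
# Build the repository directory and generate the repodata/ XML metadata:
mkdir -p /var/www/html/myrepo
cp ~/rpmbuild/RPMS/x86_64/*.rpm /var/www/html/myrepo/
createrepo /var/www/html/myrepo

# On each client, point yum at it with a .repo file:
cat > /etc/yum.repos.d/myrepo.repo <<'EOF'
[myrepo]
name=My custom packages
baseurl=http://repo.example.com/myrepo
enabled=1
gpgcheck=0
EOF
```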
I’ve been using Puppet for a little while and am now working on a project that will be using Red Hat’s Satellite (the upstream project is Spacewalk).
I haven’t really used Puppet in anger on production systems yet (I’m referring to the open source edition of Puppet), and I have only read about Satellite, but I didn’t find much comparison between the two out there, so I thought it worth writing up what I’ve found.
I originally went down a manual route as I wanted to understand the process, and since I’m familiar with manual installs this was the easiest path for me at the time.
My build script has been getting more complex lately and I’m quite pleased with it.
We tend to have several versions of code on the go: version x is live, x+1 is in UAT, and x+2 is in development. With all these versions around it’s important to keep track of changelogs, and to upgrade in sequence, x to x+1 and then x+1 to x+2, as we have found that going directly from x to x+2 can fail to uncover some bugs. Specifically, this happens if a Drupal update hook gets edited after it has been released to the client but before it has run on live: Drupal records which numbered update hooks have already run, so a site that executed the original hook will never pick up the edited version, and only the sequential upgrade path exposes that discrepancy. Our builds always start from a copy of the live site.
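To make that concrete, here’s a hypothetical Drupal 7 update hook of the kind that causes trouble; the module, table, and field names are made up:

```php
<?php
/**
 * Hypothetical example: mymodule's update 7101, shipped with version x+1.
 *
 * Drupal stores the highest update number each site has run, so if this
 * function is edited during x+2 development, a site that already ran the
 * original (e.g. a client site at x+1) will never execute the edited body.
 * Only the x -> x+1 -> x+2 upgrade path reproduces that situation.
 */
function mymodule_update_7101() {
  db_add_field('mymodule_data', 'new_flag', array(
    'type' => 'int',
    'not null' => TRUE,
    'default' => 0,
  ));
}
```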
On a project I’m working on at the moment we have a problem: files are going missing.
We don’t know which part of the system could be trashing these files (user-uploaded images in this case), and they are on a shared filesystem, so there are plenty of places to point fingers.
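One way to work out where to point the finger (a sketch of my assumption, not necessarily the approach we settled on) is a Linux audit watch on the upload directory; since audit only sees local syscalls, it has to run on every host that mounts the share. The path and key name are hypothetical:

```
# Log every write/attribute change (including deletes) under the upload
# directory; the -k key tags events so they are easy to search for later.
auditctl -w /var/www/files -p wa -k missing-uploads

# Afterwards, see exactly which process and user touched the files:
ausearch -k missing-uploads
```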