Snozberry.Org

Tech Ramblings

Puppet 3.x Install Script


Overview

This script is used to install/configure a basic puppet master on a RedHat/CentOS system.

During the installation two repositories, EPEL and PUPPETLABS, are added and then disabled so they are not used for general updates.

After the required packages are installed, the includepkgs parameter is added and the specific repositories are re-enabled. This allows you to pull updates for those specific packages only.
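If you want to confirm this worked after running the script, a quick check like the one below (just a sketch; the package list will be whatever was passed to include_repo_packages) should show includepkgs sitting directly under the repo header:

# Verify the includepkgs line was added under the [epel] section
grep -A1 '^\[epel\]' /etc/yum.repos.d/epel.repo
# Expected (abbreviated): [epel] followed by includepkgs=ruby-augeas rubygem-json ...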

Please leave any suggestions or hints in the comments below.

Puppet Install
#!/bin/bash

# Install Puppet 3.x on Centos 6.x

#############
# Variables #
#############

elv=`cat /etc/redhat-release | gawk 'BEGIN {FS="release "} {print $2}' | gawk 'BEGIN {FS="."} {print $1}'`
arch=`uname -m`
fqdn=`hostname -f`

##############
# Functions  #
##############

disable_repo() {
        local conf=/etc/yum.repos.d/$1.repo
        if [ ! -e "$conf" ]; then
                echo "Yum repo config $conf not found -- exiting."
                exit 1
        else
                sudo sed -i -e 's/^enabled.*/enabled=0/g' $conf
        fi
}

include_repo_packages() {
        local conf=/etc/yum.repos.d/$1.repo
        if [ ! -e "$conf" ]; then
                echo "Yum repo config $conf not found -- exiting."
                exit 1
        else
                shift
                sudo sed -i -e "/\[$1\]/ a\includepkgs=$2" ${conf}
                sudo sed -i -e "/\[$1\]/,/\]/ s/^enabled.*/enabled=1/" ${conf}
        fi
}

enable_service() {
        sudo /sbin/chkconfig $1 on
        sudo /sbin/service $1 start
}

disable_service() {
        sudo /sbin/chkconfig $1 off
        sudo /sbin/service $1 stop
}


# Stop/Disable SELinux (Permissive Mode)
sudo /usr/sbin/setenforce 0

# Stop/Disable IPTables v4/v6
disable_service iptables
disable_service ip6tables

# Add Puppet Labs YUM repository
sudo tee /etc/yum.repos.d/puppetlabs.repo > /dev/null << EOF
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/el/\$releasever/products/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
EOF

# Disable Puppet Labs YUM repository
disable_repo puppetlabs

# Add EPEL YUM repository
epel_rpm_url=http://dl.fedoraproject.org/pub/epel/$elv/$arch
sudo wget -4 -r -l1 --no-parent -A 'epel-release*.rpm' $epel_rpm_url
sudo yum -y --nogpgcheck localinstall dl.fedoraproject.org/pub/epel/$elv/$arch/epel-*.rpm
sudo rm -rf dl.fedoraproject.org

# Disable EPEL YUM repository
disable_repo epel

# Install Ruby prerequisites
# Packages from EPEL: ruby-augeas rubygem-json
sudo yum --enablerepo=epel -y install ruby ruby-libs ruby-rdoc ruby-augeas ruby-irb ruby-shadow rubygem-json rubygems libselinux-ruby

# Install Puppet Server
# Packages from PUPPETLABS: puppet puppet-server facter hiera
sudo yum --enablerepo=puppetlabs --enablerepo=epel -y install puppet puppet-server

# Start the puppetmaster service to create SSL certificate
sudo /etc/init.d/puppetmaster start

# Stop/Disable the puppetmaster service as it will be controlled via Passenger.
disable_service puppetmaster

# Install Passenger Apache Module ( Because WEBrick...really? )
# Packages from EPEL: mod_passenger rubygem-passenger rubygem-passenger-native rubygem-passenger-native-libs libev rubygem-fastthread rubygem-rack
sudo yum --enablerepo=puppetlabs --enablerepo=epel -y install rubygem-passenger rubygem-passenger-native rubygem-passenger-native-libs mod_passenger

# Configure the Apache conf.d for passenger
sudo tee /etc/httpd/conf.d/puppetmaster.conf > /dev/null << EOF
# you probably want to tune these settings
PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off

Listen 8140

<VirtualHost *:8140>
        SSLEngine on
        SSLProtocol -ALL +SSLv3 +TLSv1
        SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

        SSLCertificateFile      /var/lib/puppet/ssl/certs/${fqdn}.pem
        SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/${fqdn}.pem
        SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
        SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
        # If Apache complains about invalid signatures on the CRL, you can try disabling
        # CRL checking by commenting the next line, but this is not recommended.
        SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem
        SSLVerifyClient optional
        SSLVerifyDepth  1
        SSLOptions +StdEnvVars

        RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

        DocumentRoot /etc/puppet/rack/public/
        RackBaseURI /
        RailsEnv production
        <Directory /etc/puppet/rack/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>
EOF

# Create the Ruby Rack app within the Puppet directory structure for ease of management.
sudo mkdir /etc/puppet/rack
sudo mkdir /etc/puppet/rack/public
sudo mkdir /etc/puppet/rack/tmp
sudo cp /usr/share/puppet/ext/rack/files/config.ru /etc/puppet/rack
sudo chown puppet:root /etc/puppet/rack/config.ru
sudo chmod 644 /etc/puppet/rack/config.ru

# Install Apache SSL (mod_ssl)
sudo yum -y install mod_ssl

# Start/Enable apache service (httpd)
enable_service httpd

# Enable/Include just required packages from EPEL
# include_repo_packages <repo conf file> <repo name> <"package list">
include_repo_packages epel epel "mod_passenger rubygem-passenger rubygem-passenger-native rubygem-passenger-native-libs libev rubygem-fastthread rubygem-rack ruby-augeas rubygem-json"

# Enable/Include just required packages from PUPPETLABS
include_repo_packages puppetlabs puppetlabs "puppet puppet-server facter hiera"
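
Once the script finishes, a quick sanity check I like (the master hostname below is just a placeholder) is to confirm Apache/Passenger is listening on 8140 and then run an agent against the new master:

# Confirm the Passenger-backed puppet master is listening
sudo netstat -tlnp | grep 8140

# From a client node, run the agent against the new master (replace the server name)
sudo puppet agent --test --server puppet.example.com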

CoroSync/Pacemaker on Centos 6


Install Pacemaker/Corosync

From my reading online you can also use Heartbeat 3.x alongside Pacemaker to achieve similar results. I’ve decided to go with Corosync as it’s backed by Red Hat and SUSE and looks to have more active development. Not to mention that the Pacemaker project says you should now use Corosync :)

There are packages included in the Centos 6.x base/updates repositories, so we can just use yum to install the needed packages.

yum install pacemaker corosync

Setup Corosync

Generate AuthKey

Corosync requires an authkey for communication within its cluster. This file must be copied to each of the nodes that you want to add to the cluster.

If an “Invalid digest” message appears from the corosync executive, the keys are not consistent between the nodes.

To generate the authkey, Corosync provides the corosync-keygen utility. Invoke this command as the root user; the key will be written to /etc/corosync/authkey.

Grab a cup of coffee; this process takes a while to complete as it pulls from the more secure /dev/random. You don’t have to press anything on the keyboard; it will still generate the authkey.

sudo corosync-keygen 
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 128).
Press keys on your keyboard to generate entropy (bits = 192).
Press keys on your keyboard to generate entropy (bits = 256).
Press keys on your keyboard to generate entropy (bits = 328).
Press keys on your keyboard to generate entropy (bits = 392).
Press keys on your keyboard to generate entropy (bits = 456).
Press keys on your keyboard to generate entropy (bits = 520).
Press keys on your keyboard to generate entropy (bits = 592).
Press keys on your keyboard to generate entropy (bits = 656).
Press keys on your keyboard to generate entropy (bits = 720).
Press keys on your keyboard to generate entropy (bits = 784).
Press keys on your keyboard to generate entropy (bits = 848).
Press keys on your keyboard to generate entropy (bits = 912).
Press keys on your keyboard to generate entropy (bits = 976).
Writing corosync key to /etc/corosync/authkey.

Now you just need to copy this authkey to the other nodes in your cluster

sudo scp /etc/corosync/authkey root@<node2>:/etc/corosync/
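
Since mismatched keys show up as the “Invalid digest” message mentioned above, it’s worth comparing checksums on both nodes before moving on (reusing the <node2> placeholder from the scp command):

# The checksums must match on every node
sudo md5sum /etc/corosync/authkey
ssh root@<node2> md5sum /etc/corosync/authkey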

Configure corosync.conf

All changes listed below will need to be performed on ALL nodes in the cluster.

The first thing we’ll need to do is copy the example configuration file to corosync.conf. I’ll be using the udpu (unicast) configuration here as we’ll only have two nodes.

sudo cp /etc/corosync/corosync.conf.example.udpu /etc/corosync/corosync.conf

Now we’ll edit this file to set the user corosync will run as. This is necessary so that corosync can manage the pacemaker resources.

sudo vim /etc/corosync/corosync.conf

Add the following to the top of the corosync.conf file.

aisexec {
        # Run as root - this is necessary to be able to manage resources with Pacemaker
        user:        root
        group:       root
}

Edit the totem section to include the members in your cluster and set the bindnetaddr that corosync will listen on. You can leave the other settings default for now.

Add cluster members

interface {
                member {
                        memberaddr: 10.1.22.28
                }
                member {
                        memberaddr: 10.1.22.29
                }

Set bindnetaddr; this will be unique per node in the cluster.

bindnetaddr: 10.1.22.28

Create pcmk service.d file

Now we’ll create a pacemaker service.d file to tell corosync to control/run the pacemaker resources.

sudo vim /etc/corosync/service.d/pcmk

Add the following into the file you just created.

Changing ver: to 1 will allow you to start the pacemaker service manually for troubleshooting.

service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 0
}

Start/Verify Corosync is correctly configured

Now let’s start corosync on the first node in the cluster.

sudo /etc/init.d/corosync start

Check to see if corosync is running as expected

sudo /etc/init.d/corosync status
corosync (pid  18376) is running...

or

sudo crm_mon
#
# Output from crm_mon 
============
Last updated: Wed May  2 07:51:20 2012
Last change: 
Current DC: NONE
0 Nodes configured, unknown expected votes
0 Resources configured.
============
Online: [ pg1.stage.net ]

With the first node up and running, you can now start the second node.
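
The second node is the same drill; a quick sketch of what that looks like (start corosync on node 2, then check from either node):

# On node 2
sudo /etc/init.d/corosync start

# From either node; both nodes should now appear in the Online: [ ... ] line
sudo crm_mon -1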

Configure Active/Passive Cluster

The first step is to check the cluster configuration using crm_verify -L.

sudo crm_verify -L
crm_verify[19478]: 2012/05/02_07:52:37 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_verify[19478]: 2012/05/02_07:52:37 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_verify[19478]: 2012/05/02_07:52:37 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
  -V may provide more details

You’ll notice a few errors; this is because by default Pacemaker is set to make use of STONITH (Shoot The Other Node In The Head). For now we can disable this for our basic configuration.

sudo crm configure property stonith-enabled=false

Running crm_verify -L again will now complete without any errors.

Adding ClusterIP Resource

The first thing we need to do for a cluster is add a resource like an IP address, so we can always contact and communicate with the cluster regardless of where the cluster services are running. This must be a NEW address, not one associated with ANY node.

In the example below you’ll need to set the ip and cidr_netmask to the address for your cluster. You can also set the monitor interval to a lower number if you want quicker failover; I have set mine to 1s so failover is almost instantaneous.

crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip=172.25.3.20 cidr_netmask=21 \
op monitor interval=30s
# Output:
crm_verify[19566]: 2012/05/02_08:04:21 WARN: cluster_status: We do not have quorum - fencing and resource management disabled

View/verify that the ClusterIP has been added

sudo crm configure show
node pg1.stage.net
primitive ClusterIP ocf:heartbeat:IPaddr2 \
  params ip="172.25.3.20" cidr_netmask="21" \
  op monitor interval="30s"
property $id="cib-bootstrap-options" \
  dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
  cluster-infrastructure="openais" \
  expected-quorum-votes="2" \
  stonith-enabled="false"

Because we are setting up a 2 node cluster which is mathematically unable to attain quorum, we need to tell Pacemaker to ignore it.

sudo crm configure property no-quorum-policy=ignore

Now verify the quorum policy is set to ignore

sudo crm configure show
node cloo.arin.net
primitive ClusterIP ocf:heartbeat:IPaddr2 \
  params ip="172.25.3.20" cidr_netmask="21" \
  op monitor interval="30s"
property $id="cib-bootstrap-options" \
  dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
  cluster-infrastructure="openais" \
  expected-quorum-votes="2" \
  stonith-enabled="false" \
  no-quorum-policy="ignore"
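
With the ClusterIP resource defined, one way to exercise failover (a sketch using the crm shell, run from the node currently holding the IP) is to put that node into standby and watch the address move to the other node:

# Put this node into standby; ClusterIP should move to the other node
sudo crm node standby
sudo crm_mon -1

# Bring the node back online when you are done testing
sudo crm node online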

Resources

Bits & Bytes of Life

Clusters from Scratch

Puppet Group Management Module


Overview

The current group resource type allows for creation or deletion of system groups but does not allow you to manage membership within these groups.

To work around this limitation, the following puppet module, groupmanagement, has been created to allow group membership management.

Manifests

The module consists of two main manifests (add_to_group.pp, init.pp). In addition to these main manifests, a sample group manifest (boba.pp) is also included.

Process Flow

With the manifests in this module you have the option of managing just groups (creation/deletion) or additionally managing the membership of managed groups. Below is the process flow of the manifests called, starting with the site.pp within your puppet environment.

The flows assume you have created a <groupname>.pp for the group you are managing.

Adding a group

site.pp –> <groupname>.pp –> init.pp

Adding members to a group

site.pp –> groupmanagement::add_to_group.pp

Default Manifest - init.pp (add_group)

This is the default manifest for creating a group. It uses the group resource type to create and verify that a group is added.

This manifest is not called directly in site.pp but is called from the <groupname>.pp or add_to_group.pp manifests.

class groupmanagement {
  define add_group ( $gid, $status = "present" ) {
    $groupname=$title
    group { $groupname:
      ensure => $status,
      gid    => $gid,
    }
  }
}

Group Manifest (<groupname>.pp)

You can use this manifest as a template for creating a manifest per group that you wish to manage.

There is an assumption that, for sanity, you maintain consistent GIDs for groups in your environment.

Parameters:

  • status: Create or remove the group. Defaults to creation
    • <present/absent>
  • gid: Group ID
    • <gid>
  • title: name of the group you wish to manage.
    • <groupname>

You will also need to modify the class name and the .pp file name to match the group you wish to manage.

class groupmanagement::boba ($status = "present"){
  groupmanagement::add_group { "boba":
    gid    => "402",
    status => $status,
  }
}

Group Membership Manifest (groupmanagement::add_to_group)

This manifest is used to manage the membership of a group. It will create the group if it does not already exist and will add users to the group if they are not already a member.

This module requires that augeas is installed on the remote node on which you wish to manage groups.

Currently this module does NOT remove users from a group if they are not included in the array.

Parameters:

  • group: name of the group you are trying to manage
  • users: array of the users that you want added to the group
class groupmanagement::add_to_group ( $users, $group ){
  class {"groupmanagement::$group": } ->
  add_to_group { $users:
    group => $group,
  }
  define add_to_group ( $group ) {
    $user = $name
    augeas { "add-${user}-to-group-${group}":
      context => "/files/etc/group/$group",
      changes => [
        "ins user after *[self::gid or self::user][last()]",
        "set user[last()] $user",
      ],
      onlyif => "match user[. = '$user'] size == 0",
    }
  }
}
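
After a successful puppet run on a node you can sanity-check what augeas wrote to /etc/group; a hypothetical example using the boba group and two made-up users:

getent group boba
# boba:x:402:alice,bob   <- members appended by the augeas resource, GID from boba.pp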

Setup PGPool-II


Configure/Add Postgres Repo

  1. Download the Centos 6 Repo RPM from HERE
wget http://yum.pgrpms.org/9.1/redhat/rhel-6-x86_64/pgdg-redhat91-9.1-5.noarch.rpm
  2. Install the Repo RPM
sudo rpm -Uvh pgdg-redhat91-9.1-5.noarch.rpm

Install PGPool-II

Now that you have the repo installed, you can use yum to install pgpool-II from the PG repo.

sudo yum install pgpool-II-91

Configure PGPool-II

PGPool-II stores its configuration files in /etc/pgpool-II-91/. Installing the RPM will create sample configuration files.

Configure pgpool.conf

Copy the /etc/pgpool-II-91/pgpool.conf.sample to /etc/pgpool-II-91/pgpool.conf

cp /etc/pgpool-II-91/pgpool.conf.sample /etc/pgpool-II-91/pgpool.conf

~ Connection Settings ~

By default PGPool-II only accepts connections from the localhost using port 9999. If you wish to receive connections from other hosts, set listen_addresses to ‘*’.

From:
listen_addresses = 'localhost'

To:
listen_addresses = '*'

~ Backend Connection Settings ~

This section provides details about the nodes that PGPool-II is aware of and how it should interface with them. I’ll show basic settings here for a 2 node “cluster”.

I’m assuming that PGPool-II is installed on the same host as your Postgresql server. To give a better example, I’ll be using the hostname of the first node (which has both postgresql and pgpool-II) in place of “localhost”.

Notice below that backend_hostnameX has an incrementing numerical identifier for each additional node (for example, node1 = 0, node2 = 1, node3 = 2).

Also notice that the backend_portX setting is set to the default postgresql port 5432 for every node.

backend_hostname0 = 'pg1.domain.net'
                                   # Host name or IP address to connect to for backend 0
backend_port0 = 5432
                                   # Port number for backend 0
backend_weight0 = 1
                                   # Weight for backend 0 (only in load balancing mode)
backend_data_directory0 = '/var/lib/pgsql/9.1/data'
                                   # Data directory for backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
                                   # Controls various backend behavior
                                   # ALLOW_TO_FAILOVER or DISALLOW_TO_FAILOVER
backend_hostname1 = 'pg2.domain.net'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/9.1/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'

~ REPLICATION MODE ~

In order to use Replication Mode with PGPool-II you’ll need to configure all settings in this section and all sections above the REPLICATION MODE section.

For a basic setup you just need to enable replication by setting replication_mode = on; by default this setting is off.
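
In pgpool.conf the change looks like this:

From:
replication_mode = off

To:
replication_mode = on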

~ LOAD BALANCING MODE ~

When using Replication Mode with PGPool-II you have the option to enable load balancing by setting load_balance_mode = on; by default this setting is off.
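
Again, in pgpool.conf:

From:
load_balance_mode = off

To:
load_balance_mode = on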

Configure pcp.conf

PGPool-II has an administration interface for retrieving information on database nodes, shutting down PGPool-II, etc. over the network. To use PCP commands, user authentication is required; this authentication is different from PostgreSQL’s user authentication. A username and password need to be defined in the pcp.conf file. In the file, a username and password are listed as a pair on each line, separated by a colon (:). Passwords are stored in md5 hash format.

postgres:e8a48653851e28c69d0506508fb27fc5

Copy the /etc/pgpool-II-91/pcp.conf.sample to /etc/pgpool-II-91/pcp.conf

cp /etc/pgpool-II-91/pcp.conf.sample /etc/pgpool-II-91/pcp.conf

To encrypt your password into md5 hash format, use the pg_md5 command, which is installed as part of the pgpool-II executables. pg_md5 takes text as a command line argument and displays its md5-hashed text.

For example, given “postgres” as the command line argument, pg_md5 displays the md5-hashed text on standard output.

[smbambling@pg1 pgpool-II-91]$ /usr/bin/pg_md5 postgres
e8a48653851e28c69d0506508fb27fc5

PCP commands are executed over the network, so the port number must be configured with the pcp_port parameter in the pgpool.conf file.

pcp_port = 9898
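
As a quick test of the PCP setup (a sketch using the pgpool-II 3.1-era pcp syntax, the postgres/postgres credentials defined above, pgpool running locally, and a 10 second timeout), pcp_node_count should return the number of backends:

/usr/bin/pcp_node_count 10 localhost 9898 postgres postgres
# Should print 2 for the two backends configured earlier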

Configure Postgres Client Access

By default Postgres only allows local users/connections. You need to grant access to the PGPool-II server in pg_hba.conf.

Reminder that PGPool-II doesn’t support replication over IPv6

Grant access to the PGPool-II server (even if postgres is installed on the same box). In the example below 10.1.22.28/25 is the IP/netmask of the PGPool-II server and we are setting the method to trust.

host    all             all             10.1.22.28/25            trust
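
For the change to take effect, reload postgresql on each backend (same init script used in the Postgresql post below):

sudo /etc/init.d/postgresql-9.1 reload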

Install/Configure Postgresql on Centos 6


Configure/Add Postgres Repo

  1. Download the Centos 6 Repo RPM from HERE
wget http://yum.pgrpms.org/9.1/redhat/rhel-6-x86_64/pgdg-redhat91-9.1-5.noarch.rpm
  2. Install the Repo RPM
sudo rpm -Uvh pgdg-redhat91-9.1-5.noarch.rpm

Install Postgresql Server

Now that you have the repo installed, you can use yum to install Postgresql from the PG repo.

sudo yum install postgresql91-server

Initialize & Start Postgresql Server

You’ll first need to initialize the database for postgresql. If you attempt to start the service before initializing the database, you’ll get an error like this…

smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 start
/var/lib/pgsql/9.1/data is missing. Use "service postgresql-9.1.3 initdb" to initialize the cluster first.
                                            [FAILED]

To initialize the database you can call initdb via the init script.

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 initdb
Initializing database:                                     [  OK  ]

Once the initialization is successful, you’ll see the database configuration files created in /var/lib/pgsql/9.1/data.

[root@pg1 data]# ls
base    pg_clog      pg_ident.conf  pg_multixact  pg_serial    pg_subtrans  pg_twophase  pg_xlog          postmaster.opts
global  pg_hba.conf  pg_log         pg_notify     pg_stat_tmp  pg_tblspc    PG_VERSION   postgresql.conf  postmaster.pid

Now you will be able to successfully start the database with the init script

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 start
Starting postgresql-9.1 service:                           [  OK  ]

To verify that the service is running, you can issue status from the init script.

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 status
 (pid  22444) is running...

Configure Postgresql Access Permissions

Set the authentication method

When you called the initdb command above from RedHat’s init script, it configured permissions on the database. These configuration settings are stored in pg_hba.conf.

RedHat calls initdb like this:

initdb --pgdata='$PGDATA' --auth='ident sameuser'

This uses the not-so-popular ident scheme to determine if a user is allowed to connect to the database.

ident: An authentication scheme that relies on the currently logged in user. If you’ve su’d to postgres and then try to log in as another user, ident will fail (as it’s not the currently logged in user).

This can be a sore spot if you’re not aware of how it was configured, and it will give an error when trying to create a database as a user that is not currently logged into the system.

createdb: could not connect to database postgres: FATAL:  Ident authentication failed for user "myUser"

To get around this issue you can modify the pg_hba.conf file to move from the ident scheme to the md5 scheme:

From:
# IPv4 local connections:
host    all             all             127.0.0.1/32            ident
# IPv6 local connections:
host    all             all             ::1/128                 ident

To:
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
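
Reload postgresql so the new pg_hba.conf rules take effect, then test a password login; the myUser name is just the example role from the error above and assumes that role exists:

sudo /etc/init.d/postgresql-9.1 reload
psql -U myUser -h 127.0.0.1 -d postgres
# You should now be prompted for a password instead of hitting the ident failure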

Create “Super User”

By default only the postgres user on the system can create databases and manage the server. This user is granted superuser privileges on the postgres database(s) and server.

To work around this we will create an additional user with superuser privileges for management.

It’s advised NOT to grant an application user superuser privileges, for security reasons.

To create users in postgres you can use CREATE ROLE, and privileges can be modified with ALTER ROLE. To assist with user creation, postgres provides a wrapper script, createuser.

Only superusers and users with CREATEROLE privilege can create new users, so creating the initial user must be done from the postgres account.

  1. Become the postgres user
  2. Invoke the createuser script with the -P option. -P will issue a prompt for the password of the new user
    • Enter a username for the new user. We are entering root here.
    • When prompted, grant the user superuser privileges
[smbambling@pg1 ~]$ sudo su - postgres
-bash-4.1$ createuser -P
Enter name of role to add: root
Enter password for new role: 
Enter it again: 
Shall the new role be a superuser? (y/n) y
-bash-4.1$ exit

Disable Spotlight in OS X


Overview

In OS X Lion you will see the MDS worker taking a large amount of CPU while indexing your volumes. This is extremely annoying and taxing, especially on your Time Machine volumes. Below are steps and notes on disabling indexing on volumes and disabling Spotlight altogether. I’ve also included how to remove the Spotlight spyglass from the menu bar for those who want to remove the unused icon.

Disable/Enable Spotlight

Disable Spotlight - This will require the administrative password

sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist

Re-Enable Spotlight

sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist

Alternate Approach: another thing you can try is to disable indexing on your volumes.

sudo mdutil -a -i off

One thing to note is that it will not disable indexing on the Time Machine Backups.backupdb. Each time Time Machine runs, the MDS worker will index this drive.

3po:~ smbambling$ sudo mdutil -a -i off /:
 Indexing disabled.
/.MobileBackups:
 Indexing enabled. 
/Volumes/MobileBackups:
 Index is read-only.
/Volumes/MobileBackups/Backups.backupdb:
 Index is read-only.
/Volumes/TM:
 Indexing and searching disabled.
/Volumes/TM/Backups.backupdb:
 Indexing enabled.

Hide the Spotlight Menu Icon

For those who want to clean up their menubar, you can remove the Spotlight icon.

This does not disable Spotlight or mds; it only hides the icon from the menubar.

sudo chmod 600 /System/Library/CoreServices/Search.bundle/Contents/MacOS/Search

You also need to kill the SystemUIServer process to refresh the menubar and have the change take effect

killall SystemUIServer

What you’ll find is that the Spotlight menu is removed, but the search abilities built into Finder still work, as long as you have not disabled Spotlight with the above commands.

References

http://osxdaily.com/2011/12/10/disable-or-enable-spotlight-in-mac-os-x-lion/

http://osxdaily.com/2011/12/12/hide-spotlight-menu-icon-mac-os-x-lion/

Puppet Troubleshooting Tips


Overview

Below are some tips and tricks for troubleshooting puppet.

Manually Running Puppet Agent

You can manually execute puppet on a remote node and see the output. This needs to be run from the puppet.arin.net server

sudo puppet agent --test

Puppet Agent takes a long time to complete

This has happened a few times where executing the puppet agent takes a long time to complete (the average is 5-10 sec). To determine what is taking so long when applying the catalog, you can use --summarize to get a summary of the changes and the time each resource type took to complete.

sudo puppet agent --test --summarize
notice: Finished catalog run in 3.57 seconds
Changes:
 Total: 2
Events:
 Total: 2
 Success: 2
Resources:
 Total: 114
 Out of sync: 2
 Changed: 2
 Skipped: 6
Time:
 Filebucket: 0.00
 Package: 0.00
 Yumrepo: 0.00
 Exec: 0.01
 Group: 0.01
 Ssh authorized key: 0.03
 User: 0.03
 File: 0.23
 Last run: 1334583207
 Service: 2.34
 Config retrieval: 2.90
 Total: 5.56
Version:
 Config: 1334581806
 Puppet: 2.7.12

You can also see each resource being evaluated as the changes are executed by adding the --evaltrace switch.

sudo puppet agent --test --summarize --evaltrace

Modify Default OctoPress List Outdenting


By default Octopress outdents list items, both OL and UL. To correct this you can simply add the following to sass/custom/_styles.scss:

article {
  ol, ul {
    padding-left: 3em;
  }
}

Sun Java JDK Build With JPackage Modified


Overview

The original idea was to just build from the RPM package provided by Sun/Oracle. Although we were able to build the RPM from Sun using the rpm.bin file, we were unable to RPM-sign the jdk rpm build. After some digging I found an article HERE that gave some further insight on the issue. It turns out that Sun is building their RPMs with a VERY old version of RPM, 3.0.6. So after some additional searching on the web I stumbled across this wiki article, which explains building the latest Sun Java 1.6u31 with a modified JPackage process.

To check which version of RPM was used to build the rpm:

rpm -q --qf '%{RPMVERSION}' -p jdk-1.6.0_18-fcs.x86_64.rpm
3.0.6

Building Sun Java 1.6 u31

Download the JDK 1.6 u31 from Oracle

  1. Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html
  2. Click on the Download button for the Java SE 6 Update 31 JDK
  3. Select the Accept License Agreement radio button
  4. Download the jdk-6u31-linux-x64.bin for 64bit or jdk-6u31-linux-i586.bin for 32bit file

Download the Timezone Updater from Oracle

  1. Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html
  2. Click on the Download button for JDK DST Timezone Update Tool - 1.3.45
  3. Select the Accept License Agreement radio button
  4. Download the tzupdater-1_3_45-2011n.zip file

Create RPM Build Environment (If needed)

More to come on this…
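
Until then, here is a minimal sketch of the build tree that the paths below assume (~/rpms as the RPM %_topdir):

# Install rpmbuild and create the build tree
sudo yum -y install rpm-build
mkdir -p ~/rpms/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpms' > ~/.rpmmacros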

Move/Copy the Timezone Updater and JDK into SOURCES

If you’re building on a remote machine you’ll need to transfer the files to that build server via SCP or some other preferred method.

mv tzupdater-1_3_45-2011n.zip ~/rpms/SOURCES/
mv jdk-6u31-linux-x64.bin  ~/rpms/SOURCES/

Download java-1.6.0-sun-1.6.0.31-1.0.cf.nosrc.rpm

This is being performed on your build server. We will download directly to the SRPM build directory to keep things clean

wget -P ~/rpms/SRPMS/ http://mirror.city-fan.org/ftp/contrib/java/java-1.6.0-sun-1.6.0.31-1.0.cf.nosrc.rpm

Build Java RPMs

rpmbuild --rebuild ~/rpms/SRPMS/java-1.6.0-sun-1.6.0.31-1.0.cf.nosrc.rpm

After completion you should have the following RPMs in ~/rpms/RPMS/x86_64

  • java-1.6.0-sun-1.6.0.31-1.0.cf.x86_64.rpm
  • java-1.6.0-sun-demo-1.6.0.31-1.0.cf.x86_64.rpm
  • java-1.6.0-sun-devel-1.6.0.31-1.0.cf.x86_64.rpm
  • java-1.6.0-sun-jdbc-1.6.0.31-1.0.cf.x86_64.rpm
  • java-1.6.0-sun-plugin-1.6.0.31-1.0.cf.x86_64.rpm
  • java-1.6.0-sun-src-1.6.0.31-1.0.cf.x86_64.rpm

Installing Java JDK (Development Environment)

yum install java-1.6.0-sun-devel

Check Java Version

[host1]~]$ java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)