<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Ocf on BAFM</title><link>https://christian.blog.pakiheim.de/tags/ocf/</link><description>Recent content in Ocf on BAFM</description><generator>Hugo -- 0.160.1</generator><language>en</language><lastBuildDate>Fri, 08 Aug 2014 09:44:38 +0000</lastBuildDate><atom:link href="https://christian.blog.pakiheim.de/tags/ocf/index.xml" rel="self" type="application/rss+xml"/><item><title>Linux-HA and Tivoli Storage Manager</title><link>https://christian.blog.pakiheim.de/posts/2014-08-08_linux-ha-and-tivoli-storage-manager/</link><pubDate>Fri, 08 Aug 2014 09:44:38 +0000</pubDate><guid isPermaLink="false">http://blog.barfoo.org/?p=983</guid><description>&lt;p&gt;Well, since we received part of our shipment on Wednesday, I finally looked at how we&amp;rsquo;re going to deploy our active/active Tivoli Storage Manager configuration. Right now, we have a single pSeries box hosting ~100 client nodes, which we&amp;rsquo;re looking to split in two (since we now have two x366 boxes for that purpose).&lt;/p&gt;
&lt;p&gt;Now, as there is no solution for this scenario yet (neither from International Business Machines nor from anyone in the open source community), I sat down and started writing an OCF resource agent for dsmserv (that is, the Tivoli Storage Manager server).&lt;/p&gt;</description></item><item><title>Linux-HA and Tivoli Storage Manager (Finito!)</title><link>https://christian.blog.pakiheim.de/posts/2014-08-08_linux-ha-and-tivoli-storage-manager-finito/</link><pubDate>Fri, 08 Aug 2014 08:59:07 +0000</pubDate><guid isPermaLink="false">http://blog.barfoo.org/?p=1047</guid><description>&lt;p&gt;As I previously said, I was writing &lt;a href="http://christian.weblog.heimdaheim.de/2008/09/26/linux-ha-and-tivoli-storage-manager/" title="Linux-HA and Tivoli Storage Manager"&gt;my own OCF resource agent&lt;/a&gt; for IBM&amp;rsquo;s Tivoli Storage Manager Server. And I just finished it yesterday evening (it took me about two hours to write this post).&lt;/p&gt;
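&lt;p&gt;For the curious: an OCF resource agent is essentially a shell script that implements a handful of well-known actions (start, stop, monitor, meta-data) and returns the standardized OCF exit codes. The following skeleton is only a rough sketch of that shape &amp;ndash; the function bodies, paths and names here are illustrative, not taken from my actual agent:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
# Sketch of an OCF resource agent skeleton for dsmserv (illustrative only).
. ${OCF_ROOT}/resource.d/heartbeat/.ocf-shellfuncs

dsmserv_start()     { ... }  # launch the dsmserv instance, return $OCF_SUCCESS
dsmserv_stop()      { ... }  # halt it cleanly (e.g. via dsmadmc), return $OCF_SUCCESS
dsmserv_monitor()   { ... }  # $OCF_SUCCESS if running, $OCF_NOT_RUNNING otherwise
dsmserv_meta_data() { ... }  # print the XML metadata describing the agent

case "$1" in
  start)     dsmserv_start ;;
  stop)      dsmserv_stop ;;
  monitor)   dsmserv_monitor ;;
  meta-data) dsmserv_meta_data ;;
  *)         exit $OCF_ERR_UNIMPLEMENTED ;;
esac
&lt;/code&gt;&lt;/pre&gt;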
&lt;p&gt;It only took me about four work days (roughly four hours each, which weren&amp;rsquo;t recorded in that subversion repository), plus most of this week at home (10 hours a day), and about one hundred subversion revisions. The good part about it is that it actually just works :-D (I was amazed at how well, actually). Now you&amp;rsquo;re gonna say, &amp;ldquo;but Christian, why didn&amp;rsquo;t you use the included init script and just fix it up, so it is actually compliant with the LSB standard?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;The answer is rather simple: yeah, I could have done that, but you also know that wouldn&amp;rsquo;t have been fun. Life is all about learning, and learn something I did (even if I hit my head against the wall from time to time ;-) during those few days) &amp;hellip; There are still one or two things I might want to add/change in the future (that is, maybe next week), like&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;adding support for monitor depth by querying the dsmserv instance via dsmadmc (if you read through the resource agent, I already use it for the shutdown/pre-shutdown stuff)&lt;/li&gt;
&lt;li&gt;I still have to properly test it (like Alan Robertson mentioned in his &lt;a href="http://lca2007.linux.org.au/talk/29.html"&gt;one-and-a-half-hour talk on Linux-HA 2.0&lt;/a&gt; and &lt;a href="http://www.slideshare.net/opensource_training/heartbeat"&gt;on his slides&lt;/a&gt;, pages 100-102) in a pre-production environment&lt;/li&gt;
&lt;li&gt;I&amp;rsquo;ll probably configure the IBM RSA to act as a STONITH device (&lt;strong&gt;s&lt;/strong&gt;hoot &lt;strong&gt;t&lt;/strong&gt;he &lt;strong&gt;o&lt;/strong&gt;ther &lt;strong&gt;n&lt;/strong&gt;ode &lt;strong&gt;i&lt;/strong&gt;n &lt;strong&gt;t&lt;/strong&gt;he &lt;strong&gt;h&lt;/strong&gt;ead), just in case one of the nodes ever gets into a state where the box is still up, but doesn&amp;rsquo;t react to any requests anymore&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Setting up Linux-HA</title><link>https://christian.blog.pakiheim.de/posts/2008-10-01_setting-up-linux-ha/</link><pubDate>Wed, 01 Oct 2008 08:17:09 +0000</pubDate><guid isPermaLink="false">http://blog.barfoo.org/?p=1004</guid><description>&lt;p&gt;Well, initially I thought writing the &lt;a href="http://christian.weblog.heimdaheim.de/2008/09/26/linux-ha-and-tivoli-storage-manager/" title="Linux-HA and Tivoli Storage Manager"&gt;OCF resource agent for Tivoli Storage Manager&lt;/a&gt; was the hard part. But as it turns out, it really isn&amp;rsquo;t. The hard part is getting the resources into the heartbeat agent (or whatever you wanna call it). The worst part is that the hb_gui is completely worthless if you want to do a configuration without quorum.&lt;/p&gt;
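&lt;p&gt;In case you run into the same no-quorum problem on a two-node cluster: the policy can be set directly in the CIB from the shell. This is a sketch &amp;ndash; the exact attribute name and flags can differ between releases, so check the &lt;em&gt;crm_attribute&lt;/em&gt; man-page for your version:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# tell the CRM to keep running when quorum is lost (two-node cluster)
crm_attribute -t crm_config -n no_quorum_policy -v ignore
&lt;/code&gt;&lt;/pre&gt;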
&lt;p&gt;First of all, we need to set up the main Linux-HA configuration file (&lt;em&gt;/etc/ha.d/ha.cf&lt;/em&gt;). Configuring it is rather simple. For me, as I have two network devices over which both nodes see each other (one is a bond comprising two plain old 1G copper ports; the other is the 1G fibre cluster port), the configuration looks like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt" id="hl-0-1"&gt;&lt;a class="lnlinks" href="#hl-0-1"&gt; 1&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-2"&gt;&lt;a class="lnlinks" href="#hl-0-2"&gt; 2&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-3"&gt;&lt;a class="lnlinks" href="#hl-0-3"&gt; 3&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-4"&gt;&lt;a class="lnlinks" href="#hl-0-4"&gt; 4&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-5"&gt;&lt;a class="lnlinks" href="#hl-0-5"&gt; 5&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-6"&gt;&lt;a class="lnlinks" href="#hl-0-6"&gt; 6&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-7"&gt;&lt;a class="lnlinks" href="#hl-0-7"&gt; 7&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-8"&gt;&lt;a class="lnlinks" href="#hl-0-8"&gt; 8&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-9"&gt;&lt;a class="lnlinks" href="#hl-0-9"&gt; 9&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-10"&gt;&lt;a class="lnlinks" href="#hl-0-10"&gt;10&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-11"&gt;&lt;a class="lnlinks" href="#hl-0-11"&gt;11&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-12"&gt;&lt;a class="lnlinks" href="#hl-0-12"&gt;12&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-13"&gt;&lt;a class="lnlinks" href="#hl-0-13"&gt;13&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-14"&gt;&lt;a class="lnlinks" href="#hl-0-14"&gt;14&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-15"&gt;&lt;a class="lnlinks" href="#hl-0-15"&gt;15&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-16"&gt;&lt;a class="lnlinks" href="#hl-0-16"&gt;16&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-17"&gt;&lt;a class="lnlinks" href="#hl-0-17"&gt;17&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-18"&gt;&lt;a class="lnlinks" href="#hl-0-18"&gt;18&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-19"&gt;&lt;a class="lnlinks" href="#hl-0-19"&gt;19&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-20"&gt;&lt;a class="lnlinks" href="#hl-0-20"&gt;20&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-21"&gt;&lt;a class="lnlinks" href="#hl-0-21"&gt;21&lt;/a&gt;
&lt;/span&gt;&lt;span class="lnt" id="hl-0-22"&gt;&lt;a class="lnlinks" href="#hl-0-22"&gt;22&lt;/a&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;udpport 694
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;autojoin none
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;crm true
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;use_logd on
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;debug false
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;coredumps false
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;auto_failback on
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ucast bond0 10.0.0.10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ucast bond0 10.0.0.20
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ucast eth2 10.0.0.29
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ucast eth2 10.0.0.30
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;node tsm1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;node tsm2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;respawn root /usr/lib64/heartbeat/pingd -m 100 -d 5s
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ping 10.0.0.1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;respawn root /sbin/evmsd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;apiauth evms uid=hacluster,root
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Once the service itself is configured, one just needs to start the heartbeat daemon on both nodes. Afterwards, we should be able to configure the cluster resources.&lt;/p&gt;
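&lt;p&gt;Configuring resources means feeding XML fragments into the cluster information base (CIB). For illustration, creating a group could look something like this &amp;ndash; all ids and parameters below are made up for the example, not my real configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# load a resource group definition into the CIB (sketch)
cibadmin -C -o resources -x tsm1-group.xml

# tsm1-group.xml (illustrative ids and parameters):
&amp;lt;group id="tsm1-group"&amp;gt;
  &amp;lt;primitive id="tsm1-ip" class="ocf" provider="heartbeat" type="IPaddr2"&amp;gt;
    &amp;lt;instance_attributes id="tsm1-ip-attrs"&amp;gt;
      &amp;lt;attributes&amp;gt;
        &amp;lt;nvpair id="tsm1-ip-addr" name="ip" value="10.0.0.100"/&amp;gt;
      &amp;lt;/attributes&amp;gt;
    &amp;lt;/instance_attributes&amp;gt;
  &amp;lt;/primitive&amp;gt;
  &amp;lt;primitive id="tsm1-server" class="ocf" provider="heartbeat" type="dsmserv"/&amp;gt;
&amp;lt;/group&amp;gt;
&lt;/code&gt;&lt;/pre&gt;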
&lt;p&gt;I find it much easier to just update the corresponding sections with &lt;em&gt;cibadmin&lt;/em&gt; (the man-page really has some good examples). So here are my configuration files for two resource groups (&lt;em&gt;crm_mon&lt;/em&gt; doesn&amp;rsquo;t differentiate between resources and grouped resources; it&amp;rsquo;ll just show you that you configured two resources).&lt;/p&gt;</description></item></channel></rss>