<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic How to monitor a log in a cluster failover environment in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25794#M623</link>
    <description>&lt;P&gt;Hi all,&lt;BR /&gt;
I'm studying how to apply Splunk to my architecture, and I have a simple question about log monitoring.&lt;BR /&gt;
I have a failover cluster running several Oracle instances.&lt;BR /&gt;
For example, INSTANCE1 is currently running on nodeA and INSTANCE2 on nodeB.&lt;BR /&gt;
So on nodeA the alert log is at&lt;BR /&gt;
/opt/oracle/instance1/diag/rdbms/instance1/INSTANCE1/trace/alert_INSTANCE1.log&lt;/P&gt;

&lt;P&gt;Ok, with a forwarder I can set up monitoring of this log very simply. But how can I configure it so the log is monitored both on nodeA (while the instance runs there) and on nodeB (after the instance fails over)?&lt;BR /&gt;
The same question applies to process monitoring: the process ora_pmon_INSTANCE1 should be running on at least one node...&lt;/P&gt;

&lt;P&gt;Any interesting ideas?&lt;/P&gt;

&lt;P&gt;Thanks&lt;BR /&gt;
Ste&lt;/P&gt;</description>
    <pubDate>Mon, 28 Sep 2020 11:40:24 GMT</pubDate>
    <dc:creator>scislaghi</dc:creator>
    <dc:date>2020-09-28T11:40:24Z</dc:date>
    <item>
      <title>How to monitor a log in a cluster failover environment</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25794#M623</link>
      <description>&lt;P&gt;Hi all,&lt;BR /&gt;
I'm studying how to apply Splunk to my architecture, and I have a simple question about log monitoring.&lt;BR /&gt;
I have a failover cluster running several Oracle instances.&lt;BR /&gt;
For example, INSTANCE1 is currently running on nodeA and INSTANCE2 on nodeB.&lt;BR /&gt;
So on nodeA the alert log is at&lt;BR /&gt;
/opt/oracle/instance1/diag/rdbms/instance1/INSTANCE1/trace/alert_INSTANCE1.log&lt;/P&gt;

&lt;P&gt;Ok, with a forwarder I can set up monitoring of this log very simply. But how can I configure it so the log is monitored both on nodeA (while the instance runs there) and on nodeB (after the instance fails over)?&lt;BR /&gt;
The same question applies to process monitoring: the process ora_pmon_INSTANCE1 should be running on at least one node...&lt;/P&gt;

&lt;P&gt;Any interesting ideas?&lt;/P&gt;

&lt;P&gt;Thanks&lt;BR /&gt;
Ste&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 11:40:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25794#M623</guid>
      <dc:creator>scislaghi</dc:creator>
      <dc:date>2020-09-28T11:40:24Z</dc:date>
    </item>
    <item>
      <title>Re: How to monitor a log in a cluster failover environment</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25795#M624</link>
      <description>&lt;P&gt;Could you not monitor both files all the time? &lt;/P&gt;

&lt;P&gt;If the paths are the same, you could have the same config on both/all forwarders (I'm assuming that you have more than one host).&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///opt/oracle/instance*/diag/rdbms/instance*/INSTANCE*/trace/alert_INSTANCE*.log]
&lt;/CODE&gt;&lt;/PRE&gt;
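
&lt;P&gt;A slightly fuller sketch of the stanza (the index and sourcetype names here are only examples; adjust them to your environment):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[monitor:///opt/oracle/instance*/diag/rdbms/instance*/INSTANCE*/trace/alert_INSTANCE*.log]
index = oracle
sourcetype = oracle_alert_log
disabled = false
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;On the node where the instance is not running, the path simply won't exist and the stanza does nothing; after failover, the forwarder on the other node picks the file up. For the process-monitoring part, a scripted input on each node (e.g. a &lt;CODE&gt;[script://...]&lt;/CODE&gt; stanza wrapping something like &lt;CODE&gt;ps -ef | grep [o]ra_pmon_&lt;/CODE&gt;) could report which pmon processes are present, and you could alert in Splunk when a given instance shows up on neither node.&lt;/P&gt;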

&lt;P&gt;Or am I missing something?&lt;/P&gt;

&lt;P&gt;/Kristian&lt;/P&gt;</description>
      <pubDate>Thu, 12 Apr 2012 18:31:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25795#M624</guid>
      <dc:creator>kristian_kolb</dc:creator>
      <dc:date>2012-04-12T18:31:36Z</dc:date>
    </item>
    <item>
      <title>Re: How to monitor a log in a cluster failover environment</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25796#M625</link>
      <description>&lt;P&gt;Can you clarify what happens to the file on failover? If you fail instance 1 over from nodeA to nodeB, does the new instance 1 running on nodeB start writing to a new file? Or does the old file get moved from nodeA to nodeB, with the new instance appending to the moved copy?&lt;/P&gt;</description>
      <pubDate>Thu, 12 Apr 2012 23:06:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25796#M625</guid>
      <dc:creator>gkanapathy</dc:creator>
      <dc:date>2012-04-12T23:06:06Z</dc:date>
    </item>
    <item>
      <title>Re: How to monitor a log in a cluster failover environment</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25797#M626</link>
      <description>&lt;P&gt;The log file is on a shared filesystem; on failover, the filesystem is unmounted from nodeA and remounted on nodeB. The instance then keeps writing to the same file, at the same path, on a different node.&lt;/P&gt;</description>
      <pubDate>Fri, 13 Apr 2012 05:21:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25797#M626</guid>
      <dc:creator>scislaghi</dc:creator>
      <dc:date>2012-04-13T05:21:00Z</dc:date>
    </item>
  </channel>
</rss>