<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Monitoring Postgres Table in All Apps and Add-ons</title>
    <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70441#M4377</link>
    <description>Re: Monitoring Postgres Table</description>
    <pubDate>Thu, 14 Oct 2010 20:27:07 GMT</pubDate>
    <dc:creator>southeringtonp</dc:creator>
    <dc:date>2010-10-14T20:27:07Z</dc:date>
    <item>
      <title>Monitoring Postgres Table</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70440#M4376</link>
      <description>&lt;P&gt;We have a "stats" table on a Postgres server - does anyone know how to get Splunk to monitor this? I suspect it involves a script... someone must have already done something like this?&lt;/P&gt;</description>
      <pubDate>Thu, 14 Oct 2010 18:14:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70440#M4376</guid>
      <dc:creator>autovhcdev</dc:creator>
      <dc:date>2010-10-14T18:14:12Z</dc:date>
    </item>
    <item>
      <title>Re: Monitoring Postgres Table</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70441#M4377</link>
      <description>&lt;P&gt;If you just want to dump the contents of that table every XXX minutes, it should be very easy to do. &lt;/P&gt;

&lt;P&gt;Just write a shell script or batch file that runs the command-line postgres client and dumps the table(s) you want, and have Splunk index the output. Basically, any query you can run at the command line would do.&lt;/P&gt;
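
&lt;P&gt;For illustration, a minimal Python wrapper around that kind of one-shot dump might look like this (the host, user, database, and table names are placeholders - adjust them to your environment):&lt;/P&gt;

```python
# Sketch of a scripted-input dump: run the psql command-line client
# and print the rows to stdout for Splunk to index.
# All connection details and the table name are illustrative.
import subprocess

def psql_argv(table):
    """Build the argv for a one-shot, pipe-delimited dump of `table`."""
    return ["psql", "-h", "dbhost", "-U", "splunk_reader", "-d", "statsdb",
            "--no-align", "--tuples-only", "--field-separator=|",
            "-c", "SELECT * FROM " + table + ";"]

def dump_table(table="stats"):
    # check=True raises if psql exits nonzero, so failures surface in splunkd.log
    result = subprocess.run(psql_argv(table), capture_output=True,
                            text=True, check=True)
    print(result.stdout, end="")
```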

&lt;P&gt;Take a look at the documentation on scripted inputs - that should help get you started.&lt;/P&gt;

&lt;P&gt;&lt;A href="http://www.splunk.com/base/Documentation/4.1.5/Admin/Setupcustom(scripted)inputs" rel="nofollow"&gt;http://www.splunk.com/base/Documentation/4.1.5/Admin/Setupcustom(scripted)inputs&lt;/A&gt;&lt;/P&gt;
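
&lt;P&gt;As a rough example, the inputs.conf stanza for such a scripted input might look like this (the script path, interval, sourcetype, and index are placeholders):&lt;/P&gt;

```ini
# $SPLUNK_HOME/etc/apps/your_app/local/inputs.conf (illustrative)
[script://./bin/dump_stats.py]
interval = 300
sourcetype = postgres_stats
index = main
disabled = 0
```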

&lt;HR /&gt;

&lt;P&gt;If the table you want to monitor is continually growing (i.e., you're continually logging stats over time), then your problem is the same as for any other application that logs to a database.&lt;/P&gt;

&lt;P&gt;You may wish to consider having whatever populates the stats table log directly to Splunk, if that's feasible. If it isn't, you'll need to do a little more scripting work, and you should consider using Python instead of shell scripts and psql:&lt;/P&gt;
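
&lt;P&gt;As a sketch of what that Python script might look like (it assumes an integer "id" primary key, a hypothetical state-file path, and a psycopg2-style connection passed in for the actual query - only the state-file handling is shown concretely):&lt;/P&gt;

```python
# Incremental dump of new rows for a Splunk scripted input.
# Table/column names and the state-file path are illustrative;
# `conn` is assumed to be a DB-API connection (e.g. from psycopg2).
STATE_FILE = "/opt/splunk/var/last_stats_id"  # hypothetical location

def read_last_id(path=STATE_FILE):
    """Return the highest id seen so far, or 0 on the first run."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        return 0

def write_last_id(last_id, path=STATE_FILE):
    """Persist the new high-water mark for the next run."""
    with open(path, "w") as f:
        f.write(str(last_id))

def dump_new_rows(conn, last_id):
    """Print rows newer than last_id to stdout; return the new high-water mark."""
    cur = conn.cursor()
    cur.execute("SELECT id, ts, metric, value FROM stats "
                "WHERE id > %s ORDER BY id", (last_id,))
    for row in cur:
        print("|".join(str(col) for col in row))  # one event per line
        last_id = row[0]
    return last_id
```

&lt;P&gt;Each run would read the saved id, call dump_new_rows() with an open connection, and write the returned value back - Splunk picks up whatever reaches stdout.&lt;/P&gt;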

&lt;P&gt;It depends on how your table is structured, but here's a common approach if your table has an increasing primary key or timestamp column:&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Keep track of the last-seen record ID in a file
&lt;/LI&gt;&lt;LI&gt;Build your SQL query to retrieve all records with ID values higher than the last one you saw
&lt;/LI&gt;&lt;LI&gt;Dump the query results to stdout
&lt;/LI&gt;&lt;LI&gt;Update the file with the highest ID value retrieved by your query
&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Thu, 14 Oct 2010 20:27:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70441#M4377</guid>
      <dc:creator>southeringtonp</dc:creator>
      <dc:date>2010-10-14T20:27:07Z</dc:date>
    </item>
    <item>
      <title>Re: Monitoring Postgres Table</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70442#M4378</link>
      <description>&lt;P&gt;The database is way too large to dump out - is there a way to index it directly?&lt;/P&gt;</description>
      <pubDate>Thu, 14 Oct 2010 21:04:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70442#M4378</guid>
      <dc:creator>autovhcdev</dc:creator>
      <dc:date>2010-10-14T21:04:49Z</dc:date>
    </item>
    <item>
      <title>Re: Monitoring Postgres Table</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70443#M4379</link>
      <description>&lt;P&gt;What do you mean by "directly"? Splunk only indexes textual data, so at some point the records have to be converted into a text-based format that Splunk can index. There is really no concept of "adapters" or other such product-specific things, if that's what you're thinking.&lt;/P&gt;</description>
      <pubDate>Fri, 15 Oct 2010 02:51:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70443#M4379</guid>
      <dc:creator>Lowell</dc:creator>
      <dc:date>2010-10-15T02:51:02Z</dc:date>
    </item>
    <item>
      <title>Re: Monitoring Postgres Table</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70444#M4380</link>
      <description>&lt;P&gt;To be clear, the original suggestion was not advocating dumping the entire database, just the results of a single query. It depends on how your data is structured though - if you're continually adding new records, then it's more like a traditional log table than just a list of stats. See edits above for more information.&lt;/P&gt;</description>
      <pubDate>Fri, 15 Oct 2010 02:58:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitoring-Postgres-Table/m-p/70444#M4380</guid>
      <dc:creator>southeringtonp</dc:creator>
      <dc:date>2010-10-15T02:58:12Z</dc:date>
    </item>
  </channel>
</rss>

