
How to monitor and report on pending rpm updates for CentOS systems via Splunk?

vinceskahan
Path Finder

I'm running a bunch of CentOS systems and want to periodically report on and analyze which rpms will be updated the next time we try to catch up to current. Curious what the best approach might be.

The basic command to run and the resulting output looks like this:

yum check-update --disablerepo="*" --enablerepo=base --enablerepo=updates | egrep "base$|updates$"

SDL.x86_64                      1.2.14-7.el6_7.1            updates
coreutils.x86_64                8.4-37.el6_7.3              updates
coreutils-libs.x86_64           8.4-37.el6_7.3              updates
cronie.x86_64                   1.4.4-15.el6_7.1            updates

The kind of thing I'm trying to do is automate answering questions like:

  • 'which systems need updates to the "coreutils" rpm'
  • 'which systems have "any" updates at all that need applying'
  • 'which systems have "no" updates to apply'

Now I know Splunk is a pretty heavyweight way to get at that kind of info (heck, a bunch of flat files and a little perl/python or awk/sed/grep magic would be enough) but I thought it would be a good test case since we have Splunk forwarders on all the systems in question.

Ideas/suggestions?

DMohn
Motivator

You need to follow several steps to achieve this, but altogether it's pretty straightforward.

  1. Create a scripted input on the systems in question to forward the command output to your Splunk indexer. Best practice: create a custom index and sourcetype for these data entries (see the sketch below the list).
  2. Extract the fields from your data. If I understand the data correctly, the results contain the package name, version, and repo, so basically everything you need for your report. The hostname should be added automatically by the forwarders.
  3. Create your searches. They could be something like this:

index=yum_updates package=coreutils* | stats values(host) as coreutils_update     (first report)
index=yum_updates package=* | stats values(host) as any_update                    (second report)
index=yum_updates package!=* | stats values(host) as no_updates                   (third report)

One caveat on the third search: hosts with no pending updates produce no events at all, so package!=* won't return anything by itself. You'd either want the script to emit a "no updates" marker line, or subtract the hosts seen in yum_updates from your full host list.
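For step 1, a minimal sketch of what this could look like on a *nix universal forwarder (paths, schedule, index, and sourcetype names are illustrative, not gospel):

#!/bin/sh
# $SPLUNK_HOME/etc/apps/yum_updates/bin/check_updates.sh
# Emit one line per pending package from the base/updates repos.
# yum check-update exits 100 when updates exist, so force a clean
# exit code to keep the forwarder from logging the script as failed.
yum -q check-update --disablerepo="*" --enablerepo=base --enablerepo=updates \
    | egrep "base$|updates$"
exit 0

# $SPLUNK_HOME/etc/apps/yum_updates/local/inputs.conf
[script://$SPLUNK_HOME/etc/apps/yum_updates/bin/check_updates.sh]
# interval accepts cron syntax for scripted inputs: run daily at 06:00
interval = 0 6 * * *
index = yum_updates
sourcetype = yum:check-update
disabled = false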

If you need further help with steps 1 or 2, feel free to comment!

vinceskahan
Path Finder

We have what I'd call a proof-of-concept server, basically inherited from somebody who left. One Splunk server doing all the lifting. A few dozen client systems (Linux now, Windows to be added) running universal forwarders for basic logfile kinds of things (syslogs, app-specific logs, etc.). The license is only for a few GB/day, so we'll be very, very small for the foreseeable future.

What I inherited just tosses everything into main. No custom sourcetypes. A really, really quickie setup in my opinion, other than maybe a dashboard with stuff nobody knows anything about. Basically nobody here really knows how to get much value from Splunk or how to best set it up architecturally, so I'm trying to go very small/minimal/simple/slowly to figure this beast out.

I 'did' figure out I needed to touch the server to create the custom index, and data got there from the trivial app I built for one forwarder. Next I need to figure out how to do the custom sourcetype and extract the data into fields per items (1) and (2) of your original reply, and then try your example reports. Nibbling away at it.
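From reading the docs, I'm guessing something like this minimal props.conf is what that step calls for, given the three-column output above (the sourcetype name is whatever was assigned in the scripted input; the field names are my own guesses):

# props.conf -- on an all-in-one server both settings land in one place;
# EXTRACT-* is applied at search time, SHOULD_LINEMERGE where data is parsed
[yum:check-update]
SHOULD_LINEMERGE = false
EXTRACT-yum_fields = ^(?<package>\S+)\s+(?<version>\S+)\s+(?<repo>\S+)\s*$

Note that package would keep the arch suffix (e.g. coreutils.x86_64), which is why the earlier searches use a trailing wildcard like package=coreutils*.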

Definitely appreciate your help and time.

vinceskahan
Path Finder

OK - got a scripted input working (cool) with a cron-like period specified. Where I'm lost now is how to create a custom index from the 'forwarder' side only, without needing to touch the Splunk server it forwards to.

Basically, if I define an index=whatever line in inputs.conf, the input seems to run, but the custom index doesn't get automagically created on the Splunk server, and there are no errors at all on the forwarder VM.

Geez, this stuff is dense, and the stackoverflow-like answers interface is horribly inefficient to navigate. I 'think' I'm trying to do something very basic to the product, but I can't find any gentle getting-started howtos, so I don't even know what obscure terminology to use to ask the question. Very frustrating product to try to spin up on.

Anyway, appreciate any help, of course!

joesrepsol
Path Finder

Care to share what you did for the scripted inputs part? I'm trying to do the same thing now and would appreciate any knowledge.

DMohn
Motivator

Maybe it would be helpful to get some more information about the topology of your environment. Are you using indexer clustering? How do you deploy your indexer configuration? Without some modifications on the indexer side, you won't be able to configure a new index anyway.

So, first of all you need to create a new index configuration for your yum index. This is done on the indexer(s) your data gets forwarded to. Which indexes.conf you should edit depends on the current configuration. If you have an indexer cluster master, you should make the change there and push the new index config to the cluster as a configuration bundle.
For the basic layout of the indexes.conf file, check here => http://docs.splunk.com/Documentation/Splunk/6.3.1/Admin/indexesconf
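A minimal stanza for the new index might look like this (default bucket paths, index name matching the example searches above):

# indexes.conf on the indexer
[yum_updates]
homePath   = $SPLUNK_DB/yum_updates/db
coldPath   = $SPLUNK_DB/yum_updates/colddb
thawedPath = $SPLUNK_DB/yum_updates/thaweddb

A restart (or a cluster bundle push) is needed before the indexer accepts data for the new index.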

After this is done, you can check whether the data gets pushed to the correct index and is searchable.
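For example, a quick sanity check along these lines (index and sourcetype names as assumed earlier) should show one row per reporting host once the input has fired:

index=yum_updates sourcetype=yum:check-update | stats count by host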

vinceskahan
Path Finder

Topology is what looks to me like a proof-of-concept server we inherited from somebody who left before I got here: one Splunk VM on AWS doing all the lifting, and a handful of Linux VM universal forwarders monitoring the usual syslogs and a few app-specific logfiles. Very small license, just a few GB/day. No custom sourcetypes. Everything gets tossed into the main index. A couple of reports/alerts/dashboards. Basically what looks to me like a throwaway 'let's fiddle for a week' demo setup.

Nobody here has any Splunk experience at all, so I'm trying to corral this thing and go as small/simple/slow as possible, to figure out some reasonable best practices for spinning up a real environment and to provide some guidance on how to get value from Splunk here.

I 'did' figure out yesterday that I needed touch labor on the server to create the custom index, and saw the forwarded data show up OK afterward, which was good. Built a trivial app on the forwarder to generate the data à la cron, so I learned a lot (albeit too slowly/frustratingly).

Next up, I guess, is creating the custom sourcetype (the back half of (1) in your original response) and extracting the data into named fields (2), then trying your reports. Tough going, since the Splunk docs assume you know far more about the product than somebody getting their feet wet really does.

I'll keep banging at it... appreciate the help, so thanks!
