Getting Data In

Can the Universal Forwarder process core-dumps?

unitedmarsupial
Path Finder

Sometimes our application dumps core (duh!), and when this happens we'd like the output of gdb -ex "bt full" -ex quit corefile to be forwarded to the Splunk server.

Can the Forwarder do this -- instead of trying to parse a file, invoke a command and forward its output -- or must we write our own forwarder?


unitedmarsupial
Path Finder

Thanks! Would it make sense to turn the text blob output by gdb into JSON? Will the UF then notice that it is JSON -- or do I mark the "input" as such somehow?

So that, for example, each function listed on the stack becomes its own member of the "stack" array?


richgalloway
SplunkTrust

Yes, it makes sense to convert the command output into something Splunk can digest easily.  Whether that's JSON or something else (anything other than XML) depends on the data and what you need to do with it.

Keep in mind the UF has limited abilities, so any transformations of the data have to be done with shell commands at the UF level or in the first indexer/heavy forwarder that processes it.
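For illustration, one possible shape of that shell transformation -- a rough sketch only, assuming jq is installed on the host, with placeholder paths for the binary and the core:

# Sketch: wrap a gdb backtrace in a JSON object, one array member per line.
# Assumes jq is available; BINARY and CORE are placeholders.
BINARY=/my/application/binary
CORE=/my/application/directory/core.12345

gdb -batch -ex "bt full" "$BINARY" "$CORE" 2>/dev/null \
  | jq -R -s --arg core "$CORE" \
      '{core: $core, stack: (split("\n") | map(select(length > 0)))}'

This emits a single JSON event containing the core path and a "stack" array of backtrace lines.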

---
If this reply helps you, Karma would be appreciated.

richgalloway
SplunkTrust

Create a scripted input for the UF to run.  The script should contain the commands you want to run.  The UF will run the script and Splunk will index anything written to stdout.
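For example, a minimal stanza -- the app and script names here are made up, and the interval is illustrative:

[script://$SPLUNK_HOME/etc/apps/coredump_app/bin/core2json.sh]
interval = 60
sourcetype = coredump
disabled = 0

The UF will execute core2json.sh every 60 seconds and index whatever it writes to stdout.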

---
If this reply helps you, Karma would be appreciated.

unitedmarsupial
Path Finder

Could I trouble you for an example of such an input? The only documentation for "scripted input" that I see uses the script to poll a database -- rather than to process a file matching the specified pattern...

Other examples all invoke the specified script periodically -- which means it would have to process all of the discovered core-dumps in my case. I'd prefer "atomic" operation -- with core-dump detection done by the UF itself and the script invoked once for each detected file...

Is that possible?

Thank you!


richgalloway
SplunkTrust

That example is more complex than you need.  For this purpose, the only files required are starter_script.sh (which you can rename to anything else) and inputs.conf.

The starter_script.sh file will contain your shell commands to process the core dumps.  Everything written to stdout by the script will be indexed by Splunk.
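A minimal skeleton, assuming the cores land in a fixed directory and the binary path is known (both paths below are placeholders):

#!/bin/sh
# Sketch: emit a full backtrace for every core found; everything
# written to stdout gets indexed by Splunk.
COREDIR=/my/application/directory   # placeholder
BINARY=/my/application/binary       # placeholder

for core in "$COREDIR"/core.*; do
    [ -f "$core" ] || continue
    gdb -batch -ex "bt full" "$BINARY" "$core" 2>/dev/null
done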

---
If this reply helps you, Karma would be appreciated.

unitedmarsupial
Path Finder

But this starter_script.sh will still be invoked periodically -- and expected to process all discovered core-dumps in one invocation, rather than being invoked once for each core -- will it not?

Which means I'll have to keep track of which cores have been processed already -- something I'd rather leave to the UF itself...


richgalloway
SplunkTrust

Whether the script processes one dump or all of them is a personal decision.  If it were me, I'd do them all in one go.

The UF has no way to know what the script did or didn't do so it's up to the script itself to keep track.  Perhaps the dump files can be renamed or moved to another directory.
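For example, a move-based variant of the earlier skeleton (a sketch only; the "processed" directory name is arbitrary):

#!/bin/sh
# Sketch: move each core out of the watched directory after a
# successful gdb run, so the next invocation skips it.
COREDIR=/my/application/directory   # placeholder
BINARY=/my/application/binary       # placeholder
DONEDIR="$COREDIR/processed"        # arbitrary name

mkdir -p "$DONEDIR"
for core in "$COREDIR"/core.*; do
    [ -f "$core" ] || continue
    if gdb -batch -ex "bt full" "$BINARY" "$core" 2>/dev/null; then
        mv "$core" "$DONEDIR/"
    fi
done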

---
If this reply helps you, Karma would be appreciated.

unitedmarsupial
Path Finder

But it is not a "personal decision". The "scripted input" means the UF will not be monitoring the files by itself -- it will rely on the periodically spawned script to look for them and process anything discovered. Or am I mistaken?

And, if it is invoked periodically -- rather than once for each core -- then the script must process all discovered cores in one go... And keep track.

Yes, I know how to keep track of files, but the UF already has this tracking implemented -- for ordinary logs (Splunk's bread and butter), it keeps track not only of each file but of the position within each file. I'd much rather rely on that mechanism than reimplement it...

The scripted-input solution will use the custom script to both detect core-dumps and process them. I'd much prefer to have to implement only the latter part -- the processing, not the detection...

So, can something like the below be added as an input:

 

[monitor:///my/application/directory/core.*]
disabled=0
sourcetype=coredump
process_with=/my/scripts/core2json

 

It is the last line of the hypothetical input that I'm inquiring about... Can I ask the UF to invoke a program instead of attempting to parse the file on its own?

The UF would not need to know what the script did. It would just need to keep track of which files the script has already been invoked for -- and whether the invocation was successful (exited with code 0).

(For some reason, I cannot even find the full reference for the inputs.conf syntax 😞 Only examples -- but not the full list of available verbs...)


richgalloway
SplunkTrust

It's a "personal" decision in that it is not one we can make for you.

You're correct in how UFs process scripted inputs.  There are settings to run a scripted input only once (at startup), but that doesn't help here.

The script can process a single file with each invocation, but will still have to keep track of which files have been processed.

The UF's tracking ability applies only to monitored files and directories.  It does not apply to scripts and there is no API one can use.
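If you do want one core per invocation, a marker-file scheme is one way to keep track -- again just a sketch, with an arbitrary marker directory:

#!/bin/sh
# Sketch: handle at most one unprocessed core per invocation,
# recording progress with empty marker files.
COREDIR=/my/application/directory   # placeholder
BINARY=/my/application/binary       # placeholder
MARKDIR="$COREDIR/.processed"       # arbitrary marker location

mkdir -p "$MARKDIR"
for core in "$COREDIR"/core.*; do
    [ -f "$core" ] || continue
    mark="$MARKDIR/$(basename "$core")"
    [ -e "$mark" ] && continue
    gdb -batch -ex "bt full" "$BINARY" "$core" 2>/dev/null && touch "$mark"
    break   # one core per invocation
done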

Having an attribute specify a script to run to process a file is an interesting idea.  Go to https://ideas.splunk.com to make a case for it.

The syntax for inputs.conf is in $SPLUNK_HOME/etc/system/README/inputs.conf.spec.

---
If this reply helps you, Karma would be appreciated.

unitedmarsupial
Path Finder

Sigh... Yeah, this was an "interesting idea" four years ago...
