All Posts



This is awesome, but is there a way to make the results columns (additional fields on my results)?
This is the better option. Keep in mind that when you configure two outputs on Splunk and one of them stalls, the other one soon stops as well.
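For context, a dual-output setup like the one described usually looks something like the following outputs.conf sketch (the group names and hosts here are hypothetical; adjust to your environment):

```ini
[tcpout]
defaultGroup = primary_indexers, secondary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:secondary_indexers]
server = idx2.example.com:9997
```

Because the forwarder clones data to both groups through shared internal queues, back-pressure from one blocked group can eventually stall the other, which is the limitation described above.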
If you want it, you can vote up my proposal on Splunk Ideas. https://ideas.splunk.com/ideas/EID-I-2441
This is a very large limitation.  Know that we would like to see more ways to customize Markdown content as well. Thanks!
There are no timestamps in the lookup table. When I plug one in, I get the desired results.   
Assuming your data is in the _raw field:

| eval parts=split(_raw, " ")
| mvexpand parts
| eval name=mvindex(split(parts,":"),0)
| eval value=mvjoin(mvindex(split(parts,":"),1,2),":")
| eval {name}=value
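For readers less familiar with SPL, the same parsing logic can be sketched in Python (the function name and sample data are illustrative, not part of the original answer):

```python
def parse_event(raw: str) -> dict:
    """Split a raw event on spaces, then treat the text before the
    first colon as the field name and everything after it as the value,
    mirroring the split/mvindex/mvjoin logic of the SPL above."""
    fields = {}
    for part in raw.split(" "):
        name, _, value = part.partition(":")
        if name and value:
            fields[name] = value
    return fields

print(parse_event("fieldA:1:10 fieldB:1:3 fieldC:1:2"))
# {'fieldA': '1:10', 'fieldB': '1:3', 'fieldC': '1:2'}
```

As in the SPL version, events that lack some of the fields simply produce a dict without those keys.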
https://docs.splunk.com/Documentation/CIM/6.0.0/User/Install There is still one outstanding index defined in the app as of the most recent install version. You will need that index defined on the indexers; you can push it either as the full app or as a custom app to the indexers.
Do you mean?

fieldA  fieldB  fieldC
1:10    1:3     1:2
1:10            1:2
1:10            1:2
                1:1
fieldA:1:10 fieldB:1:3 fieldC:1:2
fieldA:1:10 fieldC:1:2
fieldA:1:10 fieldC:1:2
fieldC:1:1

I want to end up with fields called fieldA, fieldB, and fieldC, where the field name is the actual text found in the string, as I can't predict which event will contain which combination.
Yup, tried that, no joy. Same errors and the mongod.lock file just recreates itself.
Hi @corti77 ,
you have to configure your Netapp to send syslog using a protocol (TCP or UDP) on a port. At the same time, on rsyslog (or syslog-ng) you have to configure receiving, following the instructions at https://www.rsyslog.com/doc/index.html , and write the logs to a file with a path (usually containing the hostname) and a filename.
Then, on the Forwarder, you have to install the add-on for Netapp ( https://splunkbase.splunk.com/app/3418 ), adding an inputs.conf in its local folder. In this inputs.conf, use the batch input, which reads and then deletes the logs, instead of monitor; for instructions on how to ingest the logs, see https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Usingforwardingagents
Ciao.
Giuseppe
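A minimal inputs.conf for the batch input described above might look like this (the path, sourcetype, and index names are assumptions; take the real values from the Netapp add-on's documentation):

```ini
# Read syslog files written by rsyslog/syslog-ng, then delete them
[batch:///var/log/netapp/*/audit.log]
move_policy = sinkhole
sourcetype = netapp:audit
index = netapp
```

Note that `move_policy = sinkhole` is what makes batch delete files after indexing, which is why it is preferred over monitor when rsyslog keeps writing new files.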
I like this idea, but I've always inserted an extra step: run a query on the data in Splunk for the Source, but then use the Sourcetype value to search the inputs. This helps avoid any issues with wildcards or regex in the log path and filename. To each their own; whatever works is always the best solution.
Thanks for your reply @gcusello , my question was more about how to build the solution. I found some information about configuring Netapp: https://storageexorcist.wordpress.com/2020/06/03/ontap-native-nas-auditing-smb-and-nfs/ So maybe it is a matter of configuring it like that and sending those logs via syslog to Splunk?
@PickleRick  Ok i see, and yes currentDBSizeMB does correspond to the actual size of the index on the disk.
Many things can be done to data within the Splunk ingestion process, so the answer to any general "how to find XXX with one search" question should usually come with a big warning: "if your case is somewhat unusual, this might not work at all". I think I could even write props/transforms to rewrite each event's source to a random value (and route it to a random index). Try finding where such an event came from unless you _know_ it.
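As an illustration of how easily the source can be rewritten at ingest time, a props/transforms pair along these lines would do it (the stanza names and the replacement value are made up for the example):

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-rewrite_source = rewrite_source

# transforms.conf
[rewrite_source]
REGEX = .
DEST_KEY = MetaData:Source
FORMAT = source::totally_made_up_source
```

After this, every event of that sourcetype carries a source value that has nothing to do with the file it was actually read from, which is exactly the caveat raised above.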
Just to clear things up: I'm not the same person, just a friend who happens to share the same frustration. We're both in the trenches, managing Splunk forwarders on Linux, and trust me, it's a shared experience we can all relate to. We're not asking for much, just some consistency and automation in how we manage our installs. A proper YUM repo would save us from the ongoing dance of manual installs and updates, and help us maintain the stability and security that every sysadmin craves. So no, I'm not the one who's been posting under multiple accounts, but I am definitely right there with them, dealing with the same pain points. Let's hope we can get some traction on this, and maybe one day our dream of a YUM repo will be more than just a wish. Cheers, A Fellow Sysadmin Who Gets the Struggle
That's a valid point about HEC events being able to specify any source value; however, that would not be associated with an app and so would not apply to the OP's request. It's possible a transform can be used to change the source name to something else. Those can be found by searching the transforms.conf files on an indexer (or on the Cluster Manager).
This way you can only (if the regex matches) extract an indexed field, not modify the original event (except perhaps when you overwrite the _raw event). You're looking for the SEDCMD functionality. I'd also slightly modify your regex, since you're looking for base64-encoded content, which can contain neither a backslash nor a quote.

SEDCMD-trim-file = s/(\\"file\\":\s*\\")([^\\"]{5000,}?)/\1long_file/g

See it here: https://regex101.com/r/8nX7FY/1 (the regex101 substitution uses a slightly different format from SEDCMD; it uses $1 instead of \1)
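The substitution can be sanity-checked outside Splunk with an equivalent Python regex. This is a small demo under two assumptions: the 5000-character threshold is lowered to 10 so a short sample event triggers the rewrite, and the quantifier is made greedy so the whole value is consumed and replaced:

```python
import re

# Same idea as the SEDCMD above: match the escaped \"file\": \" prefix,
# then a long run of characters that are neither backslash nor quote
# (all that base64 can contain), and replace the run with "long_file".
pattern = r'(\\"file\\":\s*\\")([^\\"]{10,})'

# A sample escaped-JSON event as it might appear in _raw (made up).
event = r'{\"file\": \"QmFzZTY0IGNvbnRlbnQgaGVyZQ==\", \"id\": 1}'

# \g<1> is the unambiguous form of the \1 backreference in Python.
trimmed = re.sub(pattern, r'\g<1>long_file', event)
print(trimmed)
```

Short values (under the threshold) fail the `{10,}` quantifier and pass through untouched, which is the point of the length bound in the original SEDCMD.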
I just created a feature request on ideas.splunk.com: Provide a YUM repository for Splunk Feel free to vote it up
Maybe not you, but there were at least two separate freshly created accounts, each with just one comment in this thread in their history, created within some 10 minutes of each other and posting very similarly built comments. Accident? I doubt it. The Ideas portal has been alive for at least a few years now.