All Posts

Hi, would you be able to set up an app or custom parser for me? Thanks...
It's not about disabling it, because then the input is still defined, just disabled. So you'd probably need to edit the default/ files to remove the stanza altogether, which of course is a bad idea. So I'd go for fixing the spec file.
Thank you, but could you share an example of it, please?
You could write a custom command that connects to the Confluence API and posts an update to a page using the events in the search pipeline (I have done something similar to this before, but it is not something I can easily share).
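As a very rough sketch of what invoking such a command could look like (the command name confluencepost and its parameters are hypothetical, not an existing app):

    ... search producing the rows to publish ... | confluencepost page_id=123456 space_key=DOCS

The command implementation would read the piped events and call the Confluence REST API to update the page content.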
Which version of Splunk are you running? The format and data options to makeresults were introduced in version 9.
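For reference, on 9.x a quick sketch with made-up sample values would be:

    | makeresults format=json data="[{\"host\":\"web01\"},{\"host\":\"web02\"}]"

On earlier versions makeresults can only generate empty results (with just a _time field), so this syntax fails there.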
Hi @Anusree, I'm a Community Moderator in the Splunk Community. This question was posted 2 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi @PickleRick, thank you for your help! I tried setting debug to false and disabling the input, but without success. Do you think I could comment out the entire input stanza? Thank you again. Ciao. Giuseppe
Thank you, but when I run the suggestion provided over a time period I know would not return any results, nothing shows up. I expected it to provide the entire list via makeresults. Sorry if I am missing something here or don't understand your suggestion.
Hi, let me know if you were able to figure it out.
A modular input must have a specification so that Splunk knows how to let you configure it: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsconfspec So you need to have the confcheck_es_whatever type of input defined. Check your .spec files for a stanza that is _not_ commented out. If you don't have it, add it. Or remove inputs of this type altogether.
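Based on the stanza quoted elsewhere in this thread, an uncommented entry in README/inputs.conf.spec would look like this:

    [confcheck_es_bias_language_cleanup://default]
    debug = <boolean>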
I forgot to add: you could also use | timechart and work with the bins parameter to group events by time ranges. If you still want to work with stats, you can call the | bin command first. See the first example in these docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/BinCommandExamples
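For example, hourly buckets with each approach (the index name is a placeholder):

    index=my_index | timechart span=1h count

    index=my_index | bin _time span=1h | stats count by _time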
Hi at all, I installed Enterprise Security 7.2.0 on Splunk 9.1.1 and I'm receiving the following message:

Unable to initialize modular input "confcheck_es_bias_language_cleanup" defined in the app "SplunkEnterpriseSecuritySuite": Unable to locate suitable script for introspection.

I searched the documentation and at https://docs.splunk.com/Documentation/ES/7.2.0/Install/Upgradetonewerversion#After_upgrading_to_version_7.2.0 I found the following indication:

To prevent the display of the error messages, follow these workaround steps: Modify the following file:
On the search head cluster: /opt/splunk/etc/shcluster/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec
On a standalone ES instance: /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec

Add the following comment at the end of the file:

    ###### Conf File Check for Bias Language ######
    #[confcheck_es_bias_language_cleanup://default]
    #debug = <boolean> (optional)

If you are on a search head cluster, follow these additional steps: push the changes by pushing the bundle apps, then clean the messages from the top of the page so that they do not display again. In case of a standalone search head, restart the Splunk process.

Leaving aside that the page speaks of an upgrade while I'm newly installing, that the file name is wrong (input.conf.spec instead of inputs.conf.spec), and that they say to modify a .spec file: how can commented statements solve an issue? Obviously this solution didn't solve my issue. Is there anyone who can hint at a solution? Thank you in advance. Ciao. Giuseppe
I am using Splunk Enterprise and it works without quotes with -7d@h
appendcols just puts two result sets side by side without any kind of "matching" between them, so the first row of set B will be appended to the first row of set A regardless of the order of events in each set. I think you should instead simply append (not appendcols) those searches together and then do some form of stats by _time (or timechart again) to match data points from the same timestamp.
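A minimal sketch of that approach, assuming two placeholder searches bucketed to the same span:

    index=idx_a | timechart span=5m count AS countA
    | append [ search index=idx_b | timechart span=5m count AS countB ]
    | stats first(countA) AS countA first(countB) AS countB by _time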
The error message clearly says "DNS lookup failed", which means the system resolver cannot reliably determine a hostname for the local IP. You can run a packet capture tool to verify exactly what lookup is performed when the UF starts.
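For example, assuming a Linux host with tcpdump installed, run this while restarting the UF and watch for the failing query:

    tcpdump -ni any port 53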
OK. You wrote that you copied the buckets between indexers, but what are the definitions on the search heads? Indexers handle index-time operations (which are obviously not performed if the data is already indexed), but your extractions are search-time, so you should define them at the SH level.
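For instance, a simple search-time extraction would live in props.conf on the search head (the sourcetype and regex here are hypothetical):

    [my:sourcetype]
    EXTRACT-user = user=(?<user>\S+)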
An index as such doesn't do anything with the data; it just stores it. So if your data is transformed somehow, it's up to the searches which generate the summaries, and that's where you should look for answers.
There is a possible use case for searching throughout the whole 7pm-7am range if there is a possibility of an event indexing late (with a significant lag). While typically that signifies problems with data quality or with the processing pipeline, there are some ingestion schemes for which it can be a normal mode of operation (for example, WEF in pull mode has a 30-minute interval by default, if I remember correctly).

In such a case you can manipulate your time range similarly to earliest=@d+19h. You should even be able to do (but I haven't tested it since I don't have a Splunk instance available at the moment) something like earliest=-12h@d+19h. Fiddle with this and check if it's what you need; see the sketch below.

But if your data is ingested as a constant flow, then you should be OK with monitoring just the most recently ingested part, as @richgalloway said. Either use a search window slightly longer than your scheduled interval, in order not to miss any slightly lagged events, or use a continuous schedule.
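For example, a scheduled search running shortly after 7am could cover yesterday 7pm through today 7am like this (untested, index name is a placeholder):

    index=my_index earliest=-12h@d+19h latest=@d+7h

The -12h@d+19h modifier steps back 12 hours, snaps to the start of that day, then adds 19 hours, which lands on the previous day at 7pm.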
While there are some use cases where you can have the host field set to a particular metadata value if it's not specified with the event (as has already been said in this thread), that works by injecting the extracted metadata into one of the standard fields. In general there is no way to retain additional metadata with the event, so if the sender specifies the host explicitly (and it's thus not generated by the input), Splunk has no way of keeping track of source IPs/hostnames.

The same in fact goes for any other input. If you're receiving data on a network port, unless you capture the source IP in the host field (which might get extracted and overwritten later from the message body), you have no way of knowing the source address (that's one of the advantages of custom syslog receiving mechanisms).
The question is why you want to delete the events in the first place. As a general rule, events in Splunk are not deletable. Yes, there is a delete command, but it doesn't remove the events from the buckets; it just marks them as unsearchable.

But if you only need to capture a transient state which needs to be updated often, and you don't care about previous states, either search for a particular instance of your results (for example by creating a summary with a counter and incrementing said counter in subsequent generations of your summaries) or use a lookup instead of a summary index.
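The lookup variant could look roughly like this (lookup file name and search are placeholders). A scheduled search overwrites the file on every run:

    ... search producing the current state ... | outputlookup current_state.csv

and consumers simply read it back:

    | inputlookup current_state.csv

Since outputlookup replaces the previous contents each time, you always see only the latest state and nothing ever needs deleting from an index.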