All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @PickleRick, thank you for your help! I tried setting debug to false and disabling the input, but without success. Do you think I could comment out the whole input stanza? Thank you again. Ciao. Giuseppe
Thank you, but when I run the suggested search over a time period that I know would not return any results, nothing shows up. I expected it to give me the entire list from makeresults. Sorry if I am missing something here or don't understand your suggestion.
Hi, let me know if you were able to figure it out.
A modular input must have a specification so that Splunk knows how to let you configure it: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsconfspec So you need to have the confcheck_es_whatever type of input defined. Check your .spec files for a stanza that is _not_ commented out. If you don't have it, add it. Or remove inputs of this type altogether.
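As a sketch, a minimal spec stanza in the app's README/inputs.conf.spec might look like the following (the scheme name confcheck_es_whatever is a placeholder; adapt it to your actual input):

```
[confcheck_es_whatever://<name>]
* A one-line description of what this input does.

debug = <boolean>
* Optional. Enables debug logging for this input.
```

Note that the stanza must not be commented out for Splunk to pick it up during introspection.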
I forgot to add: you could also use | timechart and work with the span or bins parameter to group events by time ranges. If you still want to work with stats, you can call the | bin command. See the first example in these docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/BinCommandExamples
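A rough sketch of the bin + stats variant (index, span, and field names are assumptions; adjust to your data):

```
index=my_index
| bin _time span=1h
| stats count BY _time, status
```

which groups events into one-hour buckets, roughly equivalent to | timechart span=1h count BY status.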
Hi all, I installed Enterprise Security 7.2.0 on Splunk 9.1.1 and I'm receiving the following message: Unable to initialize modular input "confcheck_es_bias_language_cleanup" defined in the app "SplunkEnterpriseSecuritySuite": Unable to locate suitable script for introspection. I searched the documentation, and at https://docs.splunk.com/Documentation/ES/7.2.0/Install/Upgradetonewerversion#After_upgrading_to_version_7.2.0 I found the following indication:

To prevent the display of the error messages, follow these workaround steps:
Modify the following file:
On a search head cluster: /opt/splunk/etc/shcluster/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec
On a standalone ES instance: /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec
Add the following comment at the end of the file:
###### Conf File Check for Bias Language ######
#[confcheck_es_bias_language_cleanup://default]
#debug = <boolean> (optional)
If you are on a search head cluster, follow these additional steps:
Push the changes to the search head cluster by pushing the bundle apps.
Clean the messages from the top of the page so that they do not display again.
In case of a standalone search head, restart the Splunk process.

Leaving aside that the page talks about an upgrade while I'm doing a fresh install, that the file name is wrong (input.conf.spec instead of inputs.conf.spec), and that they say to modify a .spec file: how can commented statements solve an issue? Obviously this solution didn't solve my problem. Can anyone hint at a solution? Thank you in advance. Ciao. Giuseppe
I am using Splunk Enterprise and it works without quotes with -7d@h
appendcols just puts two result sets side by side without any kind of "matching" between them, so the first row of set B will be appended to the first row of set A regardless of the order of events in each set. I think you should instead simply append (not appendcols) those searches together and then do some form of stats by _time (or timechart again) to match data points from the same timestamp.
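A rough sketch of that approach (indexes, sourcetypes, and field names are placeholders):

```
index=idx_a sourcetype=st_a
| append
    [ search index=idx_b sourcetype=st_b ]
| timechart span=5m avg(value_a) AS a avg(value_b) AS b
```

Here timechart buckets both sets by _time, so values from the two searches line up by timestamp rather than by row position.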
The error message clearly says "DNS lookup failed", which means that the system resolver cannot reliably determine a hostname for the local IP. You can run a packet capture tool to verify exactly what lookup is performed at the start of the UF.
OK. You wrote that you copied the buckets between indexers. But what are the definitions on the search heads? Indexers handle index-time operations (which are obviously not performed if the data is already indexed), but your extractions are search-time, so you should define them at the SH level.
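As a sketch, a search-time extraction defined in props.conf on the search head might look like this (the sourcetype and the regex are assumptions; use your own):

```
# props.conf (deployed to the search head)
[my:sourcetype]
EXTRACT-user_action = user=(?<user>\S+)\s+action=(?<action>\S+)
```

Search-time extractions like EXTRACT apply at search execution, so they work even for buckets that were indexed elsewhere and copied over.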
The index as such doesn't do anything with the data; it just stores it. So if your data is transformed somehow, it's up to the searches which generate the summaries; you have to look there for answers.
There is a possible use case for searching throughout the whole 7pm-7am range if there is a possibility of an event being indexed late (with a significant lag). While this typically signifies problems with data quality or with the processing pipeline, there are some ingestion schemes for which it can be a normal mode of operation (for example, WEF in pull mode has a 30-minute interval by default, if I remember correctly). In such a case you can manipulate your time range similarly to: earliest=@d+19h You should even be able to do (but I haven't tested it, since I don't have a Splunk instance available at the moment) something like: earliest=-12h@d+19h Fiddle with this and check if it's what you need. But if your data is ingested with a constant flow, then you should be OK with monitoring just the most recently ingested part, as @richgalloway said. Either use a search window slightly longer than your scheduled interval, in order not to miss any slightly lagged events, or use a continuous schedule.
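Assuming the search is scheduled at 7am, a sketch of the full-range variant (index name and aggregation are placeholders, and as noted above the modifier chain is untested):

```
index=my_index earliest=-12h@d+19h latest=@d+7h
| stats count BY host
```

Reading the earliest modifier left to right at 7am: -12h lands at 7pm yesterday, @d snaps to yesterday's midnight, and +19h moves forward to yesterday 7pm; latest=@d+7h is today 7am.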
While there are some use cases where you can have the host field set to a particular metadata value when it's not specified with the event (as has already been said in this thread), this works by injecting the extracted metadata into one of the standard fields. In general, there is no way to retain additional metadata with the event, so if the sender specifies the host explicitly (and it's thus not generated by the input), Splunk has no way of keeping track of source IPs/hostnames. The same in fact goes for any other input. If you're receiving data on a network port, unless you capture the source IP in the host field (which might get extracted and overwritten later from the message body), you have no way of knowing the source address (that's one of the advantages of custom syslog receiving mechanisms).
The question is why you want to delete the events in the first place. As a general rule, events in Splunk are not deletable. Yes, there is a delete command, but it doesn't remove the events from the buckets; it just marks them as unsearchable. But if you need to capture just a transient state which needs to be updated often, and you don't care about previous states, either search for a particular instance of your results (for example, by creating a summary with a counter and incrementing that counter in subsequent generations of your summaries) or use a lookup instead of a summary index.
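As a sketch of the lookup approach (index, fields, and lookup name are assumptions):

```
index=my_index sourcetype=my:status
| stats latest(status) AS status BY host
| outputlookup current_host_status.csv
```

Each scheduled run overwrites the lookup with the current state, so there is never anything to delete.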
Hi @LearningGuy, the approach is correct, but only you can know if the search before the delete command is correct. About the Users menu: you're probably working on a Search Head Cluster, so normal users cannot modify user roles; contact an admin to perform this action. One additional piece of information that I forgot in my first answer: the delete command performs a logical deletion, not a physical deletion. In other words, it marks the events as "deleted" so you don't see them anymore, but they remain in the buckets until the bucket is discarded. Ciao. Giuseppe
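For reference, the general shape is a search that matches exactly the events to mark as deleted, piped to delete (values below are placeholders; double-check the search on its own before adding the pipe, since the operation cannot be undone):

```
index=my_index sourcetype=my:sourcetype earliest=-24h host=bad_host
| delete
```

Running delete also requires a role with the can_delete capability, which ordinary users don't have by default.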
Hi @Lax, as @ITWhisperer and I said: if you could share some samples of your logs, we could be more detailed. The final part of the search will surely be the stats command, but how to arrive at it depends on how the data appears in your logs. We need samples to understand how to separate eventual multiple values into single values for grouping with stats. Ciao. Giuseppe
As mentioned before, see inputs.conf for the HEC stanza: https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Inputsconf#http:_.28HTTP_Event_Collector.29 You can set the host at the event level (which is the way that takes precedence) or you could set it using the connection host.
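For example, an event-level host in a HEC JSON payload sits next to the event body as a metadata key (all values here are placeholders):

```
{
  "host": "web-01.example.com",
  "sourcetype": "my:sourcetype",
  "event": { "message": "user logged in" }
}
```

A host sent this way overrides whatever the token or connection would otherwise assign.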
You only need to report whether an event arrived since the last time the search ran.  If an event came in earlier, then the previous run of the search would have found it.  So, run every 15 minutes and use earliest=-15m, or run once at 7am and use earliest=-12h, or something in between.
Have you tried something like | stats sum(storage) by _time  
I was able to resolve the issue. In the _internal index, the following events were generated. I used them to determine which index Splunk wanted to sort the events into, and created it. Search peer mysplunkidxs.splunkcloud.com has the following message: Redirected event for unconfigured/disabled/deleted index=intended_index with source="source::1234" host="host::abc" sourcetype="sourcetype::456:efg" into the LastChanceIndex. So far received events from 1 missing index(es).
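If others hit the same symptom, a sketch of a search to list every missing index reported by the peers (the rex pattern assumes the message format shown above):

```
index=_internal sourcetype=splunkd "LastChanceIndex"
| rex "index=(?<missing_index>\S+)"
| stats count BY missing_index
```

Each resulting row is an index name you likely need to create (or an input you need to fix).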