All Posts

So this issue was caused by an fapolicyd deny-all rule. Once I moved the rule file out of /etc/fapolicyd/rules.d, it let me upgrade Splunk.
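For anyone hitting the same thing, a sketch of the steps (the rule filename and backup path below are hypothetical — check your own rules.d for the deny-all entry):

```
# Locate the deny-all rule (filename below is illustrative, yours will differ)
grep -rn "deny" /etc/fapolicyd/rules.d/

# Move the offending rule file out of rules.d (backup path is illustrative)
mv /etc/fapolicyd/rules.d/90-deny.rules /root/fapolicyd-backup/

# Rebuild the compiled rule set and restart the daemon
fagenrules --load
systemctl restart fapolicyd
```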
Thank you. After investigating the problem, this turned out to be a super sparse search, so I needed more IOPS. I raised the IOPS to 25,000 and the search speed improved dramatically. It's resolved!
Thank you for your reply. I ingest about 1 billion events per day into the Splunk indexer. I checked the Monitoring Console and didn't see any abnormalities.
Thank you, I tried searching with the TERM directive, but search speed is still slow.
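For reference, TERM only speeds things up when the value appears in the raw event as a single token bounded by major breakers (spaces, newlines); a sketch with a made-up index and value:

```
index=example TERM(10.0.0.1)
| stats count by host
```

If the value is split by major breakers in the raw data, TERM will not match it as one token and won't help.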
Hi @Cheng2Ready Yes, you just have to split each line of the field into a separate event, then you can use stats last to grab the last line:

index=example "House*" Message=* | makemv Message | mvexpand Message | stats last(Message) as last_line
cloud instance 
They're on the same network using intranet bandwidth, and they have 100 MB of bandwidth.
There is no pattern or punctuation, so running regex might not work in this situation, since I can't know what kind of error or pattern will appear in the final line/sentence of the field. The last sentence can be anything and is unpredictable, so I just wanted to see if there is a way to grab the last line of the log that is in the field. This example most likely won't help, but it paints a picture that I just want the last line.

index=example | search "House*" | table Message

The log looks similar to this:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example /local/line499
D://example ......a bunch of sensitive information
D://example /crab/lin650
D://example ......a bunch of sensitive information
D://user/local/line500

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : someone stepped on the wire.

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://user/local/line980 ,indo

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : Simon said Look

Goal:
D://user/local/line500
Error : someone stepped on the wire.
D://user/local/line980 ,indo
Error : Simon said Look

I hope this makes sense....
From the screenshot, you have started the ALD session, and the Summary shows you successfully started and stopped CollectionCapture... unfortunately, no Java Collections were eligible for evaluation. If you look at the middle section of the screen, it gives the explanation/details. To qualify for evaluation, a Collection must have a certain size and number of elements. Two App Server Agent settings/parameters are mentioned:

minimum-size-for-evaluation-in-mb - The default value is 5 MB (I think). Depending on your application, you may want to increase or decrease this value.

minimum-number-of-elements-in-collection-to-deep-size - The default is 1000 elements, which may be large for some applications. If your application is small and we're not sure any collection has around 1000 elements, we can try lowering this value.

Next is Start On Demand Capture Session:

If Session Duration is too small, we may not have a sufficient time window to capture those Collections (objects/classes). If the default 10 mins shows nothing, then try 15.

If Collection Age is too small, the Collection is too "young" and hence its size may not be enough to be a candidate for evaluation. Go with the default 2 mins.

If all the criteria are good, you should see something like the below. Thanks.
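As a sketch, the two node properties above can be registered in the Controller under the agent's node-property configuration; the values here are illustrative lower-than-default settings for a small application, not recommendations:

```
minimum-size-for-evaluation-in-mb=2
minimum-number-of-elements-in-collection-to-deep-size=500
```

After changing either value, re-run the On Demand Capture Session so the new thresholds apply.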
Do you have a link to this extension? I cannot find it.
Hi Joe, yes, you can download the app, patch it and upload it as a private app. Cheers, Andreas
Basically what I'm looking for is: I have a multiselect Server input. If I select 5 servers, where 3 belong to US and 2 to UK, I want two panels. The US panel shows its clients (3 total), whereas the UK panel shows the identical thing but only for its 2 clients. How do I achieve this?
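One way to sketch this, assuming you maintain a lookup (here a hypothetical server_region.csv with host and region columns) that maps each server to US or UK — each panel reuses the same multiselect token and filters to its own region:

```
index=example host IN ($server_tok$)
| lookup server_region.csv host OUTPUT region
| where region="US"
| stats count by host
```

The UK panel would be identical with region="UK"; the token name and lookup are assumptions to illustrate the pattern.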
I tried those, but if I select multiple indexes, the sourcetype multiselect does not work.
Exactly. Disabling default apps is a bit tricky. Combine this with the 9.1.x version running a search on every instance, and an automated method like this helps on large indexer clusters, cluster managers, et cetera. The application works on most apps, though I did find that the Splunk Assist app cannot be disabled using the REST API. Thanks @isoutamo
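For reference, the per-app REST call looks roughly like this (credentials and app name are placeholders):

```
curl -k -u admin:changeme -X POST \
  https://localhost:8089/services/apps/local/<app_name>/disable
```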
The below log entries include different formats within them. I'm not sure how to write props.conf for proper field extraction and line breaking. Each log entry has text, a delimiter (|), and JSON.

2024-03-11T20:58:12.605Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Create","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXX/XXXXX.smil/transmux/XXXXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":360,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXX","wflow":"System_default"}
2024-03-11T20:58:12.611Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Cache","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXXXX/XXXXXX.smil/transmux/XXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":0,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXXXXXXX","wflow":"System_default"}
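A sketch of a possible props.conf for this (the sourcetype name is made up, and the regexes assume every event starts with an ISO-8601 timestamp and carries the JSON after the " | " delimiter — verify against your real data):

```
[session_manager:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 30
# Search-time: pull the JSON payload that follows the " | " delimiter
EXTRACT-json_payload = \|\s(?<json_payload>\{.+\})$
```

At search time you could then expand the JSON fields with | spath input=json_payload.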
This answer https://community.splunk.com/t5/Dashboards-Visualizations/How-to-use-token-in-a-multi-select-form-input/m-p/480570 is close to what you want.  You would end up with a set of sourcetype=data1 OR sourcetype=data2 etc. And you can initialize the default value with comma separated values as shown in https://community.splunk.com/t5/Dashboards-Visualizations/choose-all-Multiselect-values-by-default-without-using/m-p/357860
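To illustrate those two answers together, a minimal SimpleXML sketch of a multiselect whose defaults are comma-separated and whose token expands into an OR list (token name and values are placeholders):

```
<input type="multiselect" token="st_tok">
  <label>Sourcetype</label>
  <choice value="data1">data1</choice>
  <choice value="data2">data2</choice>
  <default>data1,data2</default>
  <valuePrefix>sourcetype="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
```

With this, selecting both choices yields sourcetype="data1" OR sourcetype="data2" wherever $st_tok$ is used in a search.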
Hello, I need help with auto multi-selecting input values. I have index values like data1, data2, data3. If I select data1, the sourcetype related to data1 should be auto-selected; if I multiselect data1 & data2 in the index input, it has to auto-select both in the sourcetype multiselect.
I assume you have logs that explicitly say whether the website is up or down. If so, you could make a new alert which compares the website status in the past X minutes up to the time of the search (when the website is up) versus the website status between 2X and X minutes ago. Let's assume a time window of 5 minutes; you can set the schedule of the alert to every 1-5 minutes depending on how responsive you would like it to be. (I recommend throttling the alert if the schedule interval is shorter than 5 minutes.)

<search filters for website status=ok> earliest=-5m
| append [<search for website status = NOT OK> earliest=-10m latest=-5m]
| stats values(status) as status dc(status) as dcstatus by website
| where dcstatus > 1

This should only find websites that were down 10-5 minutes ago and are up in the last 5 minutes.
I have an alert that can clear in the same minute that it originally fired. When the correlation search runs, both events are in it: the alert and the clearing alert. The correlation search creates notable events for each, but uses the current time, not the _time from the original alerts, as the _time for the notable events. Since both alerts are converted into notable events during the same correlation search run, they get the exact same timestamp. This means ITSI cannot definitively know the correct order of the events, and it sometimes thinks the Normal/Clear event came BEFORE the original alert. This seems odd to me; I would have imagined that ITSI would use the original event time as the _time for the notable event, but it doesn't. Any ideas on how to address this?
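One possible workaround — an assumption on my part, not a documented ITSI fix — is to carry the original alert time through the correlation search as its own field, so downstream logic has something to break timestamp ties on:

```
... existing correlation search ...
| eval orig_time=_time
| sort 0 + orig_time
```

The orig_time field name is illustrative; the idea is that episode review or aggregation policies can order on the preserved original time rather than the notable's creation time.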