All Posts

I have the Slack integration hooked up to Splunk On-Call, and I would like to trigger a Splunk On-Call alert when the Slack usergroup is used. How do I go about setting that up, if anyone knows? Thank you in advance.
Load the Monitoring Console and go to Indexing -> Performance -> Indexing Performance: Instance. Select various indexers in your cluster to compare:
- If different indexers show massively different queue values, you may have a data imbalance; since UFs by default stick to an ingestion queue for 30 seconds, you should observe this over time.
- If all queues from left to right are full, this is a disk write issue: the indexer can't write to disk fast enough.
- You can override the default indexer queue and pipeline settings via .conf files to increase the available size (see the sketch below), but you should be very confident in your admin abilities; I don't recommend this for novice administrators. Working with Splunk support is recommended regardless of your experience, novice or advanced.
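For illustration only, a minimal server.conf sketch of the kind of overrides meant above - the queue name, size and pipeline count are placeholder values, not recommendations, so validate them with Splunk support before touching a production indexer:

# server.conf on an indexer - illustrative values only
[general]
# run two ingestion pipeline sets instead of one (costs extra CPU and memory)
parallelIngestionPipelines = 2

# enlarge a specific ingestion queue, e.g. the indexing queue
[queue=indexQueue]
maxSize = 1GB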
Please share the rest of the configuration, e.g. the data source with the search being used.
Hi All, we are getting the below error message from the ITSI rules_engine: "ErrorMessage="One or more fields are missing to create episode state."", which is stopping episode creation for some of the events. However, when we check the search results, there are no null or empty field values for the respective fields. Please help me fix this; detailed steps would be appreciated. Thanks in advance to all.
Done
Correct, it works now. Can you please edit your answer? I'll mark it as the solution after that. Thanks a lot!
Hi Splunk, I created a dashboard with various panels. Some of the panels are tables with drilldown searches that let you click on a value and open a new tab, using the clicked value ($row.user.value$) in the new search. However, for some reason the drilldown on one panel opens the search without populating the variable $row.user.value$. All the other panels' drilldown searches work. Source code of the panel:

{
  "type": "splunk.table",
  "options": {
    "count": 100,
    "dataOverlayMode": "none",
    "drilldown": "none",
    "showRowNumbers": false,
    "showInternalFields": false
  },
  "dataSources": {
    "primary": "ds_aaaa"
  },
  "title": "Panel One (Last 30 Days)",
  "eventHandlers": [
    {
      "type": "drilldown.linkToSearch",
      "options": {
        "query": "index=\"winlog\" EventCode=4625 user=$row.user.value$",
        "earliest": "auto",
        "latest": "auto",
        "type": "custom",
        "newTab": true
      }
    }
  ],
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

The SPL after clicking on the table value: index="winlog" EventCode=4625 user=$row.user.value$
Why does $row.user.value$ not populate?
Here's an example using fieldsummary. Replace the base search and field list as needed. To summarize all fields, remove the field list. Note, however, that SplunkWeb doesn't handle results with many columns as well as it does results with many rows.

index=_internal source=*splunkd.log*
``` get top 3 values discarding ties ```
| fieldsummary maxvals=3 host log_level component
| fields field values
| eval values=json_array_to_mv(values)
``` use stats as an alternative to mvexpand ```
| stats count by field values
| fields - count
| spath input=values
| fields - values
``` uncomment expensive sort to show values in decreasing count order ```
``` | sort 0 - count ```
``` beware of field name conflicts ```
``` field names that conflict with values or counts will be overwritten ```
| eval {field}=value
``` using count_field instead of field_count helps sort columns ```
| eval {field}_count=count
``` replace row_key with another name if you already have a field named row_key ```
| streamstats global=f count as row_key by field
| fields - field value count
| stats values(*) as * by row_key
| fields - row_key
Ahhhh. Right. You need to use series=short with timewrap to get those s0, s1 suffixes and so on.
I'm familiar with the settings above. I use no_priority_stripping on all UDP inputs because I have some logic that processes PRI codes.
Yes, you are right, but I tried it correctly before that. What version of Splunk are you running? I'm on 9.1.1 and that's not how timewrap names the columns for me:

_time    event_count_1week_before    event_count_latest_week
XXXX     YYYY                        ZZZZ

That's how it does it. If I have a span of more than 2 weeks, it will create another column ending in *_2weeks_before. So I changed it to something like this, but the output is still empty:

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site useother=false limit=5
| timewrap 1w
| foreach *_latest_week
    [ eval <<MATCHSTR>>_combined=<<MATCHSTR>>_latest_week."|".<<MATCHSTR>>_1week_before_week ]
| fields _time *_combined
| untable _time Web.series values
| eval values=split(values,"|")
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values
I got the query; we need to use dedup. Thanks anyway.
This query worked, but I have found one issue: it picks up duplicate values in the dashboard if we run it again. Is there any way to avoid the old values when we run it multiple times within a short period?
Right. I seem to have skipped the timewrap while copying from my test environment. But you miscopied the foreach. The timewrap creates multiple series based on the same name, with suffixes _s0, _s1 and so on (if there are more spans). That's what the foreach relies on. You can't just arbitrarily cut it down to *_.
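Just to illustrate the shape of it - a minimal sketch rather than your exact search; the Web.site/event_count names are lifted from your query, and I'm assuming here that _s1 is the more recent week and _s0 the earlier one (verify the suffix order on your own data first):

| timechart span=10m count as event_count by Web.site useother=false limit=5
| timewrap 1w series=short
``` every Web.site series now exists twice, e.g. foo_s0 and foo_s1 ```
| foreach *_s1
    [ eval <<MATCHSTR>>_combined='<<MATCHSTR>>_s1'."|".'<<MATCHSTR>>_s0' ]
| fields _time *_combined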
For HA DS setups you usually use DNS-based load-balancing/fail-over. It's relatively easy because - again - you usually don't care much about the DS's state. With HFs which are _only_ a parsing tier, it's also relatively trivial - you just set up multiple HFs and load-balance your traffic to them on the source UFs (or via an HTTP LB if you're using HEC). It gets much more problematic if you want to _pull_ some data with modular inputs on the HF (like DB Connect, some add-ons for cloud services and such). That gets tricky. I think there was even a .conf presentation about HF replication, but I can't find it at the moment.
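For the UF-to-HF leg, a minimal outputs.conf sketch of what I mean by load-balancing on the source UFs - the hostnames are placeholders:

# outputs.conf on the universal forwarders - illustrative only
[tcpout]
defaultGroup = hf_pool

[tcpout:hf_pool]
# the UF rotates between these targets automatically
server = hf1.example.com:9997, hf2.example.com:9997
# how often (in seconds) the UF switches target; 30 is the default
autoLBFrequency = 30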
OK. That's interesting. I was pretty sure it worked as I wrote, but it apparently doesn't. From my tests it seems that even if I specify a completely non-existent sourcetype, Splunk does break events on a udp:// input on the syslog priority header. And indeed it only ingests an event after receiving the <PRI> of the next event. That's strange. And that's one more reason for _not_ listening for syslog directly on the indexer/HF. EDIT: I haven't tested it yet, but I suppose it can have something to do with:

no_priority_stripping = <boolean>
* Whether or not the input strips <priority> syslog fields from events it receives over the syslog input.
* A value of "true" means the instance does NOT strip the <priority> syslog field from received events.
* NOTE: Do NOT set this setting if you want to strip <priority>.
* Default: false

no_appending_timestamp = <boolean>
* Whether or not to append a timestamp and host to received events.
* A value of "true" means the instance does NOT append a timestamp and host to received events.
* NOTE: Do NOT set this setting if you want to append timestamp and host to received events.
* Default: false
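If you want to experiment with those settings, a minimal inputs.conf sketch - the port and sourcetype are placeholders, and whether this actually changes the event-breaking behaviour is exactly what would need testing:

# inputs.conf on the receiving instance - illustrative only
[udp://514]
sourcetype = syslog
# keep the <PRI> header in the raw event instead of stripping it
no_priority_stripping = true
# don't prepend a timestamp/host to the received event
no_appending_timestamp = true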
Hi @Vnarunart, this is a question for a certified Splunk Architect or Splunk PS, not for the Community. Anyway, as @PickleRick also said, there are no problems with the DS because your infrastructure continues to run without it; the only issue, in case of DR, is that you cannot perform a forwarder configuration update, so you don't need to put it in the DR site, or you can use a passive DS. It's different for the HF, because you should analyze which data flows pass through the HF and then configure a load balancer to send the traffic also to the DR HF. But, as I said, this is an architectural analysis and it's difficult to perform in the Community. Ciao. Giuseppe
Thank you @PickleRick, we'll try using rsyslog instead of Splunk to forward the logs and let you know if we solved the issue. Can you please tell me what you think about the duplicate events in the index? What should I investigate? Thank you, Andrea
One additional question: do you know any free products that can act as a syslog server (listen on UDP/TCP and store files) but run on Windows?
Thanks for your answer. Running it as you provided it (plus adding the timewrap, which I think you forgot), it doesn't produce any output; I removed the last where just to troubleshoot it. I tried replacing your s0 and s1 like this, but again the output is empty:

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site useother=false limit=0
| timewrap 1w
| foreach *_
    [ eval <<MATCHSTR>>_combined=<<MATCHSTR>>_latest_week."|".<<MATCHSTR>>_1week_before_week ]
| fields _time *_combined
| untable _time Web.series values
| eval values=split(values,"|")
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values

It looks like a complex workaround for something that sounds like a pretty standard use case to me. Do you know any other, simpler way of doing this? Thanks again!