All Posts

What is your Splunk configuration to listen for UDP 5514?
Hello @rrovers, The layout in Dashboard Studio can be set to either absolute or grid. However, there is currently no option to set a dynamic height and width for the table based on the number of rows. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated..!!
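For reference, a table's size lives in the layout section of the dashboard definition and is fixed per item. A minimal sketch of the grid form (viz_table1 and the pixel values are placeholders):

    {
      "layout": {
        "type": "grid",
        "structure": [
          {
            "item": "viz_table1",
            "type": "block",
            "position": {"x": 0, "y": 0, "w": 1200, "h": 400}
          }
        ]
      }
    }

Since nothing in the definition can reference the row count at render time, the practical workaround is to size the table for the largest expected period.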
Hi everyone, I'm working on a Splunk query to analyze API request metrics, and I want to avoid using a join because it is making my query slow. The main challenge is that I need to aggregate multiple metrics (min, max, avg, and percentiles) and pivot HTTP status codes (S) into columns, but the current approach with xyseries is dropping the additional values: Min, Max, Avg, P95, P98, P99. The reason for using xyseries is that it generates columns dynamically, so the result contains only the status codes that actually occur, with their counts. Here's the original working query with the join:

    index=sample_index sourcetype=kube:container:sample_container
    | fields U, S, D
    | where isnotnull(U) and isnotnull(S) and isnotnull(D)
    | rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
    | stats count as TotalReq by ApiName, S
    | xyseries ApiName S, TotalReq
    | addtotals labelfield=ApiName col=t label="ColumnTotals" fieldname="TotalReq"
    | join type=left ApiName
        [ search index=sample_index sourcetype=kube:container:sample_container
        | fields U, S, D
        | where isnotnull(U) and isnotnull(S) and isnotnull(D)
        | rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
        | stats min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName]
    | addinfo
    | eval Availability% = round(100 - ('500'*100/TotalReq), 2)
    | fillnull value=100 Availability%
    | eval range = info_max_time - info_min_time
    | eval AvgTPS = round(TotalReq/range, 5)
    | eval Avg=floor(Avg)
    | eval P95=floor(P95)
    | eval P98=floor(P98)
    | eval P99=floor(P99)
    | sort TotalReq
    | table ApiName, 1*, 2*, 3*, 4*, 5*, Min, Max, Avg, P95, P98, P99, AvgTPS, Availability%, TotalReq

I attempted to optimize it by combining the metrics calculation into a single stats command and using eventstats or streamstats to calculate the additional statistics without dropping the required fields. I also tried passing the additional metrics to xyseries, as below, but that did not help. PS: I tried ChatGPT without success, so I'm seeking help from real experts.

    | stats count as TotalReq, min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName, S
    | xyseries ApiName S, TotalReq, Min, Max, Avg, P95, P98, P99
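One join-free pattern that may help here: instead of pivoting with xyseries, create a marker field per status code with eval's dynamic field naming ({S}), then compute every metric in a single stats pass. A sketch against the field names above (the sc_ prefix is arbitrary, and coalesce guards APIs that never returned a 500):

    index=sample_index sourcetype=kube:container:sample_container
    | fields U, S, D
    | where isnotnull(U) and isnotnull(S) and isnotnull(D)
    | rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
    | eval sc_{S} = 1
    | stats count as TotalReq, sum(sc_*) as sc_*, min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName
    | rename sc_* as *
    | addinfo
    | eval Availability% = round(100 - (coalesce('500', 0)*100/TotalReq), 2)
    | eval range = info_max_time - info_min_time
    | eval AvgTPS = round(TotalReq/range, 5)

The status columns arrive already pivoted (200, 404, 500, ...), so the join, the second search pass, and xyseries all disappear; addtotals can be re-applied at the end if the ColumnTotals row is still needed.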
I have a playbook set up to run on all events in a 10minute_timer label using the Timer app. These events do not contain artifacts. I've noticed the playbook runs fine when testing on a test_event that contains an artifact. When I moved it over to run on the timer label, it dies when it gets to my filter block. I've also run the exact same playbook on an event in my test_label which also didn't contain an artifact, and that too fails. I've tested it without the filter block and used a decision block instead; that works fine. Both blocks share the same Scope in the Advanced settings drop-down. My conditions are fine in the filter block and should evaluate to True; I added a test condition on the label name to make sure of this, and even that is not triggering. I think this may be a bug. I'm open to being wrong, but I'm not sure what else I can do to test it. Thanks. I believe this is a bug with SOAR.
We use Splunk for creating reports. When I insert a table in Dashboard Studio I have to define a width and height for it. But the height should be different for each period we run the dashboard, because the number of rows can differ per period. How can I do this without changing the layout every month?
I found the problem: when Splunk was installed, it got installed as a heavy forwarder, so it was looking for the next indexer. I deleted outputs.conf, restarted Splunk, and it started working.
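For anyone who hits the same symptom: a forwarding install typically carries a stanza like this in $SPLUNK_HOME/etc/system/local/outputs.conf (a sketch; the group name and host are placeholders), which makes the instance forward instead of indexing locally:

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = indexer1.example.com:9997

Removing (or renaming) the file and restarting with $SPLUNK_HOME/bin/splunk restart re-enables local indexing, as described above.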
I did connect MySQL to Splunk using DB Connect, but not on the Universal Forwarder; I do not know how I can connect a DB on a UF. Also, I am still figuring out how I can send the audit logs for the connected DB using the Universal Forwarder.
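Note that DB Connect cannot run on a Universal Forwarder; it needs a full Splunk Enterprise instance such as a heavy forwarder. What a UF can do is monitor the audit log files the database writes to disk, with a plain inputs.conf stanza (a sketch; the path and sourcetype are assumptions for a MySQL audit plugin that logs to a file):

    [monitor:///var/log/mysql/audit.log]
    index = mysql
    sourcetype = mysql:audit
    disabled = 0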
We have different lookup inputs into the Splunk ES asset list framework. Some values for assets change over time, for example due to DHCP or DNS renaming. When an asset gets a new IP due to e.g. DHCP, the lookup used as input into the asset framework is updated accordingly, but the merged asset lookup "asset_lookup_by_str" will contain both the new and the old IP. So the new IP is appended to the asset; it's not replacing the old IP. Due to the "merge magic" that runs under the hood in the asset framework, over time this creates strange assets with many DNS names and many IPs. My question is: how long are asset list field values stored in the Splunk ES asset list framework? Are there any hidden values that keep track of, say, an IP, and will Splunk eventually remove the IP from the asset in the merged list? Or will the IP stay there forever, so that these "multivalue assets" just grow with more and more DNS names and IPs until the mv field limits are reached? And if I reduce the asset list mv field limits, how does Splunk prioritize which values will be included or not? Do the values already on the merged list have priority, or do new values have priority? I tried looking for answers in the documentation but could not find answers to my questions there. Hoping someone will share some insights here. Thanks!
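While waiting for an authoritative answer, one way to at least measure the drift is to inspect the merged lookup directly (a sketch; the thresholds are arbitrary, and the field names reflect a typical ES asset lookup):

    | inputlookup asset_lookup_by_str
    | eval ip_count = mvcount(ip), dns_count = mvcount(dns)
    | where ip_count > 3 OR dns_count > 3
    | sort - ip_count
    | table key, ip, dns, ip_count, dns_count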
Hello, it was not confirmed previously, but it appeared unlikely at the time. Previously, the issue persisted even after I changed the schedule from 2 7 * * * to 2,27 7 * * * and later even to 2 7,19 * * *, which required UF restarts at different times of day. While time sync does occur, it doesn't occur often enough to have affected all of these attempts. Today, I double-checked one of the systems more consistently affected (index=<WindowsLogs> host=<REDACT> EventCode=4616 4616 NewTime) and found that a time synchronization did not occur around the time the issue manifested, in particular at the time of a UF service restart.
I have set up Splunk; the machine has 15:26 as local time, but when I check the splunkd.log time it is 20:26. Why is there a difference between the local time and the splunkd.log time?
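A 5-hour offset usually means the splunkd process is running in a different timezone (often UTC) than your shell, since splunkd.log timestamps use the local time of the splunkd process. A quick diagnostic sketch, assuming a default Linux install under /opt/splunk:

    date                                               # what the OS thinks local time is
    timedatectl | grep "Time zone"                     # the system timezone
    grep -i "TZ" /opt/splunk/etc/splunk-launch.conf    # a timezone pinned for splunkd, if any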
You have too many searches trying to run at the same time.  That means some searches have to wait (are delayed) until a search slot becomes available.  Use the Scheduled Searches dashboard in the Cloud Monitoring Console to see which times have the most delays and reschedule some of the searches that run at those times.
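If you prefer raw SPL to the dashboard, the scheduler's own logs show the same picture (a sketch; status values can vary slightly by version):

    index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
    | timechart span=30m count by status

Grouping skipped runs by savedsearch_name and reason then points at the specific searches worth rescheduling.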
To refer to a field in an event, use single quotes around the field name.  Dollar signs refer to tokens, which are not part of an event. | `filter_maintenance_services('fields.ServiceID')`
Hi, I am a rookie in SPL and I have this general correlation search for application events:

    index="foo" sourcetype="bar" (fields.A="something" "fields.B"="something else")

If this were an application-specific search I could just specify the service in the search. But what I want to achieve is to use a service id from the event, rather than a fixed value, to suppress results for that specific service. If I append

    | `filter_maintenance_services("e5095542-9132-402f-8f17-242b83710b66")`

to the search it works, but if I use the service id from the event data it does not, e.g.

    | `filter_maintenance_services($fields.ServiceID$)`

I suspect that it has to do with fields.ServiceID not being populated when the filter is deployed. How can I get this to work?
Our Splunk receives logs from VMware Workspace ONE (mobile device management, MDM) as syslog messages. What is the sourcetype that needs to be configured in inputs.conf, and is there any add-on to assist in parsing?
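If the messages arrive on a dedicated port, a direct syslog input is one option (a sketch; the port, index, and sourcetype names below are placeholders, so check Splunkbase for a Workspace ONE add-on that defines an official sourcetype before inventing one):

    [udp://5514]
    connection_host = ip
    sourcetype = vmware:workspaceone:syslog
    index = mdm

In production, a syslog server (e.g. syslog-ng) writing to files monitored by a forwarder is generally preferred over direct UDP inputs.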
Thanks both of you - both work :-0)
Hi Team, I am getting the below error message on my Splunk ES search head. Is there any troubleshooting I can perform in Splunk Web to correct this? Please help. PS: I don't have access to the backend.
Thx Giuseppe!
Thank you. I will use it as a reference. 
The upside to the Splunk-supported add-ons is that they have decent documentation. In this case it's https://splunk.github.io/splunk-add-on-for-palo-alto-networks/
Dynamic alert recipients for a detector, mainly using custom properties in the alert recipients tab in detectors. Unable to crack that!