All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I found the problem: when Splunk was installed, it was installed as a heavy forwarder, so it was looking for the next indexer. I deleted outputs.conf, restarted Splunk, and it started working.
I did connect MySQL to Splunk using DB Connect, but not on the Universal Forwarder; I do not know how I can connect a DB on a UF. Also, I am still figuring out how I can send the audit logs for the connected DB using the Universal Forwarder.
We have different lookup inputs into the Splunk ES asset list framework. Some values for assets change over time, for example due to DHCP or DNS renaming. When an asset gets a new IP due to e.g. DHCP, the lookup used as input into the asset framework is updated accordingly, but the merged asset lookup "asset_lookup_by_str" will contain both the new and the old IP. So the new IP is appended to the asset; it does not replace the old IP. Due to the "merge magic" that runs under the hood in the asset framework, over time this creates strange assets with many DNS names and many IPs.

My question is: how long are asset list field values stored in the Splunk ES asset list framework? Are there any hidden values that keep track of, say, an IP, and will Splunk eventually remove the IP from the asset in the merged list? Or will the IP stay there forever, so that these "multivalue assets" just grow with more and more DNS names and IPs until the mv field limits are reached? And if I reduce the asset list mv field limits, how does Splunk prioritize which values are included? Do the values already on the merged list have priority, or do new values have priority?

I tried looking for answers in the documentation but could not find any. Hoping someone will share some insights here. Thanks!
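To see how far this accumulation has gone, a minimal SPL sketch for inspecting the merged lookup is below. It assumes the merged lookup keeps multivalue `ip` and `dns` fields, which matches the standard ES asset lookup schema but should be verified against your environment:

```spl
| inputlookup asset_lookup_by_str
| eval ip_count=mvcount(ip), dns_count=mvcount(dns)
| where ip_count > 1 OR dns_count > 1
| sort - ip_count
| table ip, ip_count, dns, dns_count
```

Sorting by `ip_count` surfaces the worst "multivalue assets" first, which makes it easier to judge whether the merge behavior is actually approaching the mv field limits.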
Hello, it was not confirmed previously, but it appeared unlikely at the time. Previously, the issue persisted even after I changed the schedule from 2 7 * * * to 2,27 7 * * * and later even 2 7,19 * * *, which required UF restarts at different times of day. While time sync does occur, it doesn't occur often enough to have affected all of these attempts. Today, I double-checked one of the systems more consistently affected (index=<WindowsLogs> host=<REDACT> EventCode=4616 4616 NewTime) and found a time synchronization did not occur around the time the issue manifested, especially at the time of a UF service restart.
I have set up Splunk. The machine has 15:26 as the local time, but when I check the splunkd.log time it is 20:26. Why is there a difference between the local time and the splunkd.log time?
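One way to compare the raw splunkd.log timestamps against how Splunk renders the same events in your timezone, a minimal sketch assuming the default `_internal` index is searchable:

```spl
index=_internal source=*splunkd.log*
| head 5
| eval rendered_time=strftime(_time, "%Y-%m-%d %H:%M:%S %z")
| table _raw, rendered_time
```

If the raw timestamp and `rendered_time` differ by a fixed offset, the gap is usually a timezone mismatch (for example, the server OS clock set to UTC while you read the log in local time) rather than a clock error.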
You have too many searches trying to run at the same time. That means some searches have to wait (are delayed) until a search slot becomes available. Use the Scheduled Searches dashboard in the Cloud Monitoring Console to see which times have the most delays and reschedule some of the searches that run at those times.
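If the Cloud Monitoring Console isn't convenient, a rough equivalent can be sketched against the scheduler's internal logs. This assumes the default scheduler log fields (`status`, `savedsearch_name`, `app`), which may vary by version:

```spl
index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count by savedsearch_name, app
| sort - count
```

The searches at the top of the list are the ones most often losing the contest for a search slot, and are the best candidates for rescheduling.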
To refer to a field in an event, use single quotes around the field name.  Dollar signs refer to tokens, which are not part of an event. | `filter_maintenance_services('fields.ServiceID')`
Hi, I am a rookie in SPL and I have this general correlation search for application events: index="foo" sourcetype="bar" (fields.A="something" "fields.B"="something else") If this were an application-specific search, I could just specify the service in the search. But what I want to achieve is to use a service ID from the event rather than a fixed value to suppress results for that specific service. If I append | `filter_maintenance_services("e5095542-9132-402f-8f17-242b83710b66")` to the search it works, but if I use the event data service ID it does not, e.g. | `filter_maintenance_services($fields.ServiceID$)`. I suspect it has to do with fields.ServiceID not being populated when the filter is deployed. How can I get this to work?
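Putting the earlier single-quote advice together with this search, a hedged sketch is below. The index, sourcetype, fields, and macro name all come from the post; whether the macro accepts a field reference depends on how `filter_maintenance_services` is defined, since single quotes resolve to the field's value only where the expanded macro text lands in an eval or where context:

```spl
index="foo" sourcetype="bar" (fields.A="something" "fields.B"="something else")
| `filter_maintenance_services('fields.ServiceID')`
```

By contrast, `$...$` is token syntax for dashboards and saved-search substitution, so `$fields.ServiceID$` never resolves against per-event data inside a search pipeline.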
Our Splunk receives logs from VMware Workspace ONE (mobile device management (MDM)) as syslog messages. What source type needs to be configured in inputs.conf, or is there an add-on to assist in parsing?
Thanks both of you - both work :-0)
Hi Team, I am getting the below error message on my Splunk ES search head. Is there any troubleshooting I can perform in Splunk Web to correct this? Please help. PS: I don't have access to the backend.
Thx Giuseppe!
Thank you. I will use it as a reference. 
The upside to the Splunk-supported add-ons is that they have decent documentation. In this case it's https://splunk.github.io/splunk-add-on-for-palo-alto-networks/
Dynamic alert recipients for a detector, mainly using custom properties in the alert recipients tab in detectors. Unable to crack that!
I've been in touch with support; this is a known issue and there's no plan to fix it. There is a workaround that can be used: | map [search index=_internal [| makeresults | eval earliest=$earliest$, latest=$latest$ | return earliest, latest]] It's a bit longer and needs another subsearch, but can be easier than escaping everything. Thanks, everyone, for your input @PickleRick @richgalloway
Thank you for your reply. I will choose the Splunk-supported add-on.
Hi @tscroggins, Thanks for your reply. Do you perhaps know if there are any time-range args that work with input-dashboard? Otherwise, should I use another method?
Hi, I am facing the same issue and found this thread. Was the issue resolved? Can you let me know the fix if it is working for you now? Thanks
Hello All, Has anyone encountered a situation like this before? Thanks!