All Posts

Yes, because the KV store is not a time-series DB like the Splunk index effectively is. A KV store has no fixed _time field like there is for every event in a Splunk index - you define the fields in your collection, so you need to control what gets filtered. If you have a field called KV_entry_time, which is stored as an epoch, then you will need to convert your time picker selection to epoch start/end values and then

| inputlookup {collection_name} where KV_entry_time >= $time_picker_start$ AND KV_entry_time < $time_picker_end$

There is a trick to converting the time picker input to start/end epoch values - you need a background search in the XML like this

<search>
  <query>| makeresults | addinfo</query>
  <done>
    <set token="time_picker_start">$result.info_min_time$</set>
    <set token="time_picker_end">$result.info_max_time$</set>
  </done>
  <earliest>$time_picker.earliest$</earliest>
  <latest>$time_picker.latest$</latest>
</search>

which will use addinfo to get the time picker's epoch values from info_*_time, and then the token setting will convert those to the time_picker_* tokens you can use in the collection search.

Hope this helps
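For illustration, a minimal Simple XML sketch tying the hidden search and the collection query together - my_collection and KV_entry_time are placeholder names, and note the comparison operators must be XML-escaped inside <query>:

<form>
  <fieldset>
    <input type="time" token="time_picker">
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <!-- hidden search: converts the picker selection to epoch tokens -->
  <search>
    <query>| makeresults | addinfo</query>
    <done>
      <set token="time_picker_start">$result.info_min_time$</set>
      <set token="time_picker_end">$result.info_max_time$</set>
    </done>
    <earliest>$time_picker.earliest$</earliest>
    <latest>$time_picker.latest$</latest>
  </search>
  <row>
    <panel>
      <table>
        <!-- the epoch tokens filter the collection -->
        <search>
          <query>| inputlookup my_collection where KV_entry_time &gt;= $time_picker_start$ AND KV_entry_time &lt; $time_picker_end$</query>
        </search>
      </table>
    </panel>
  </row>
</form>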
And same in 2024.  Thank you!
This statement

| stats values(index) as index by InstanceId

should certainly give you a field called index which will contain main/other or both. Doing

| stats values(*) as * dc(index) as index_count by InstanceId

would give you all the values of every field from both indexes and a field called index_count that would contain a 1 or 2.

You can't match the ResourceId against the InstanceId before the stats, as the events are not yet "joined" together, so there will either be a ResourceId (from index=main) OR an InstanceId (from index=other); the coalesce+stats will join the two datasets together on that now-common field (due to coalesce).

Effectively what you are saying is that after the stats, it will show, for each InstanceId (where InstanceId has come from ResourceId in index=main), the values of the indexes those IDs were found in. After the stats you can then match as needed. I believe what you are trying to do is to say "I need to only show results where a ResourceId from index=main has also been found as an InstanceId in index=other". The logic to decide that is mvcount(index)=2 (this means it was in both indexes). You could use index_count from the dc(index) example above - that is the same as doing the mvcount.

Doing values(*) as * is simply a way to carry through all fields combined from both indexes when joining the data together - as you have tried stats values(index) as index..., that should simply carry forward main+other to that field.

Can you give an example of the data you have in both, and a search result that highlights what you are getting?
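Pulling the pieces together, a minimal sketch of the whole pattern as described above (index and field names taken from this thread):

index=main OR index=other
| eval InstanceId=coalesce(ResourceId, InstanceId)
| stats values(*) as * dc(index) as index_count by InstanceId
| where index_count=2

The final where keeps only IDs that appeared in both indexes - equivalent to mvcount(index)=2.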
Thank you for your kind response. I am getting 10 detections if there are 10 rows in the result, but the average time to detect should be an average of all the time differences for one alert. Please find the attached screenshot for more information.

The Splunk alert Splunk_Attack_1 triggered 2 times; I want to take the average of the times and display only one result with the difference.

Sample result

_time               search_name      event time Hour at Source   Mean Time to Detect
2/5/2024 19:47:10   Splunk_Attack_1  2/5/2024 17:47:10           2 Hr 3 Min 19 Secs.000000
2/5/2024 19:20:10   Splunk_Attack_1  2/5/2024 17:20:10           2 Hr 7 Min 18 Secs.000000
2/5/2024 19:30:35   Splunk_Attack_2  2/5/2024 18:30:35           1 Hr 37 Min 12 Secs.000000
2/5/2024 18:20:15   Splunk_Attack_2  2/5/2024 18:20:15           1 Hr 26 Min 15 Secs.000000
2/6/2024 18:05:15   Splunk_Attack_2  2/6/2024 18:05:15           1 Hr 26 Min 15 Secs.000000
2/7/2024 16:55:15   Splunk_Attack_3  2/7/2024 14:55:15           2 Hr 0 Min 18 Secs.000000
2/8/2024 16:35:15   Splunk_Attack_3  2/8/2024 14:35:15           2 Hr 20 Min 18 Secs.000000
2/9/2024 16:10:15   Splunk_Attack_3  2/9/2024 14:10:15           2 Hr 40 Min 18 Secs.000000

Expected result (one row per alert)

_time               search_name      event time Hour at Source   Mean Time to Detect
2/5/2024 19:47:10   Splunk_Attack_1  2/5/2024 17:47:10           2 Hr 3 Min 19 Secs.000000
2/5/2024 19:20:10   Splunk_Attack_2  2/5/2024 17:20:10           2 Hr 7 Min 18 Secs.000000
2/5/2024 19:30:35   Splunk_Attack_3  2/5/2024 18:30:35           1 Hr 37 Min 12 Secs.000000
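Not the thread's final answer, but a hedged sketch of one way to collapse the rows to a single average per alert in SPL, assuming the event time is available as an epoch field (event_time here is an assumed name):

... | eval detect_secs = _time - event_time
| stats latest(_time) as _time avg(detect_secs) as avg_detect_secs by search_name
| eval "Mean Time to Detect" = tostring(round(avg_detect_secs, 0), "duration")
| table _time search_name "Mean Time to Detect"

tostring(x, "duration") renders seconds as HH:MM:SS rather than the "2 Hr 3 Min" format shown above, so some extra formatting would be needed to match exactly.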
Eighty transactions of up to an hour is a new requirement that my previous suggestion will not handle. The transaction command is pretty inefficient and will become even more so when it has to track many transactions over a long time range. Rather than help you with a specific, sub-optimal solution, let's see if there's another solution to the problem. What problem are you trying to solve?
For the connection to be considered secure, the name of the host must match the name in the certificate. So if you're connecting to the FQDN, your cert must contain the FQDN. If you want just the hostname, you must have the hostname in the cert. If you only have the FQDN in the cert and connect to just the hostname, you'll get a warning. The same goes for IPs. As a side note, it's quite typical for CAs to be reluctant to issue certs for IP addresses.
https://docs.splunk.com/Documentation/Splunk/9.2.0/Alert/EmailNotificationTokens

That page lists the tokens you can use. I assume you want the same saved search config in both environments, so result-based tokens are also a no-no. So you're limited to $server.serverName$, I think.
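If the goal is just an environment-specific subject line, a minimal sketch using only tokens from that page might look like this in the saved search definition:

# savedsearches.conf - sketch; assumes each environment's serverName identifies it
action.email.subject = $server.serverName$: $name$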
HFs can log their OS/server just the same as a UF can.  Use the same TAs as on UFs - don't just copy inputs.conf files.
1. We can't answer sales questions. Only sales people can do that reliably. Obviously the answers to sales questions will depend heavily on the size of what you'd be talking about. And quite frankly, 6GB/day is not a very big license.
2. Even if they were willing to split, remember that even Splunk Free is relatively "big" compared to what you're trying to get.
3. What might be most reasonable is to install your license on a license manager, split your license into separate stacks, and connect your license peers to that LM so that you manage all your indexers from a single entitlement (see the sketch below).
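As a rough sketch of point 3, each license peer just needs to point at the license manager in server.conf (the URI is a placeholder):

# server.conf on each license peer
[license]
manager_uri = https://lm.example.com:8089
# (the setting is named master_uri in pre-9.0 releases)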
Adding the maxspan option does work for the one particular event where the start/stop events happen at the same time. The next issue that comes up is that there are around 80 "transactions" that I am monitoring that can have a duration of over an hour. The only way I can think of making this work is to have two different transaction creation lines inside a case statement - one with the maxspan and one without, depending on a job name that I am extracting earlier in my code. Is that possible, or do you have any other ideas/suggestions?
It's possible the policy has changed over the years.  Again, the account team should be your source for answers to license questions.  Specifically ask them if Support will further divide the license.
I recently received CA certificates from my organization's PKI team. In the CSR, I provided the server hostname in the CN and SAN, so when I access the GUI using the hostname the connection is secure. But when I access it with the IP, it is not secure. So, do I need to provide the IP in the SAN? Or is there an alternate way, so that the browser can only reach Splunk Web through hostname:8000 and not IP:8000?

Please share your suggestions.
Have you tried using the maxspan option to limit how far apart the startswith and endswith events can be?

| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status") maxspan=0
Is there a way to give a user read-only access to only a specific dashboard in Splunk ES, such as the Executive Summary dashboard? Any assistance would be greatly appreciated!

*Edit: Sorry, we have the user role and user created, but we are unable to restrict it to a single dashboard. We can specify an app such as ES, but have been unsuccessful in getting a default dashboard set. When you land on ES there are the "Security Posture", "Incident Review", "App Configuration", etc. menu items. Would it be possible to change one of these from "Security Posture" to "Executive Summary", so that they are just a click away from the appropriate dashboard? Thank you!
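For the "one click away" part, one hedged approach is to edit the app's navigation XML so the desired view is the default; the internal view name used here (ess_executive_summary) is an assumption - check the dashboard's URL for the real one:

<!-- default.xml (Settings > User interface > Navigation menus) - sketch -->
<nav search_view="search">
  <!-- default="true" makes this view the app's landing page -->
  <view name="ess_executive_summary" default="true" />
  <view name="incident_review" />
</nav>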
Thank you @richgalloway. Our sales rep said that if we buy a 5GB/day or 6GB/day license, the smallest they (the sales department) will divide it into is 1GB/day chunks. However, I know for a fact that a Splunk support ticket can be submitted to divide an existing license into smaller-than-1GB/day chunks. So I'm just trying to reconcile these two different sources: does the sales department follow a different set of rules than Splunk customer support does? Has anything changed since I last divided a license many years ago?
I have Heavy Forwarders that are running on Windows and Linux servers that still need to be monitored. Are there best practices for what to and not to log from a Heavy Forwarder? For example, can I take my default Windows inputs.conf file from my Universal Forwarders and apply it to my Heavy Forwarders or will this cause a "logging loop" where the Heavy Forwarder is logging itself logging? I am completely guessing but maybe I could copy over my UF inputs.conf file but disable the wineventlog:application logs? What would be the equivalent on a Linux HF?
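To make the guess concrete, the idea in the question would look roughly like this on a Windows HF (a sketch of the poster's own suggestion, not a recommendation - see the reply above about using the TAs instead):

# inputs.conf - stanza names follow the Splunk Add-on for Windows convention
[WinEventLog://Application]
disabled = 1

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

On Linux, the rough equivalent would be disabling the corresponding [monitor:///var/log/...] stanzas from the *nix TA.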
Cisco does not own Splunk, yet. The community is unlikely to have a better answer than Sales. Your account team should have the answer for you, or be able to get it.
Apologies, I didn't realize it got posted in "Getting Data In". Well, I have data already in Splunk and am trying to create a custom alert that triggers an email to a DL when the condition is met. But I don't have an env field in either the DEV or PROD data. When I create the alert with subject "DEV $name$", the admin team deploys the same code to PROD, saying they want to keep the same code across all environments. So I'm getting the alert as "DEV myAlert" in PROD. So I'm checking if there is a way to implement this just by including a token?
I used rex "Receiver_ID =(?<Receiver_ID>.+)\s TxnType" and it worked.
WebSphere Application Server 8.5 for z/OS SMF type 120 record support implementation