All Posts


Hi, thanks for the reply. In the meantime I found another solution; I will try it next to see if it works.
I am using the below to load colors in a dropdown list. The data loads properly, but it always shows: Could not create search - No Search query provided

<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
  <search>
    <query/>
    <earliest>$Time.earliest$</earliest>
    <latest>$Time.latest$</latest>
  </search>
</input>
As I explained earlier, you don't need to just look back further and further. The "issue" is to do with indexing lag. Whenever that lag spans a report time period boundary, you have the potential for missed events. To mitigate this, you could use overlapping time periods, and use some sort of deduplication scheme, such as a summary index, if you want to avoid multiple alerts for the same event.
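As a minimal sketch of that overlapping-window plus deduplication idea (the index names, the search filter and the md5-based event_key are assumptions, not part of the original reply): a search scheduled every 15 minutes looks back 30, and events already written to a summary index are filtered out before being collected and alerted on.

index=my_index sourcetype=my_sourcetype "error"
    earliest=-30m@m latest=@m
| eval event_key=md5(_raw)
| search NOT [ search index=my_summary earliest=-60m@m | fields event_key ]
| collect index=my_summary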
Thanks for your answer KothariSurbhi. After some debugging I've discovered that Splunk pulled logs again from many buckets, from all kinds of different dates, on February 23rd. It seems that logs that had already entered Splunk in 2023 entered again on February 23, 2024 for a reason that is still unclear. Nothing happened on the AWS side and the S3 buckets look perfectly fine.
I will try searching the last 60 minutes and throttling on the incidentId.
Hello @matoulas Can you please elaborate on the question? A one-liner doesn't seem to define the actual problem.
Hello @alexspunkshell, the below search should give you the list of all CIM index macro definitions:

| rest /servicesNS/-/-/admin/macros count=0 splunk_server=local | search title=cim*indexes | table title definition

Please accept the solution and hit Karma, if this helps!
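If you then want the individual index names rather than the raw definitions, one possible extension (the rex pattern is an assumption about how the macro definitions are written, e.g. index=foo OR index=bar) is:

| rest /servicesNS/-/-/admin/macros count=0 splunk_server=local
| search title=cim*indexes
| rex field=definition max_match=0 "index\s*=\s*(?<cim_index>[^\s\)]+)"
| stats values(cim_index) AS indexes BY title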
If your report runs every 15 minutes looking back 15 minutes, there will be boundary conditions where an event has a timestamp in the 15 minutes prior to the reported period but didn't get indexed until this period, and is therefore missed.
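For example (a sketch with a placeholder index and an assumed 15-minute delay; size the offset to your observed indexing lag), the scheduled search can evaluate an older, already-settled window instead of the most recent 15 minutes:

index=my_index earliest=-30m@m latest=-15m@m
| stats count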
Timechart will be filling in the empty time slots with zeroes. Given that you have an error, I suspect that this part of the process wasn't reached before the error occurred, which is why those slots are missing from your final result.
You need to investigate your settings for your report and determine why you are missing alerts.
Initially I was facing an error. I passed the token inside quotes, as "$token$", and that worked for me. Thanks a lot.
Have a nice day! I have several Splunk instances and often see the message below:

WorkloadsHandler [111560 TcpChannelThread] - Workload mgmt is not supported on this system.

I know that the workload management feature is not supported on Windows systems, and it is obviously disabled. How can I get rid of this annoying message in splunkd.log?
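One possible way to silence it, sketched here under the assumption that the logging channel name matches the WorkloadsHandler component shown in the log line (verify on your instance; a splunkd restart is needed), is to raise that channel's level in $SPLUNK_HOME/etc/log-local.cfg:

# $SPLUNK_HOME/etc/log-local.cfg (overrides log.cfg; channel name assumed from the message above)
category.WorkloadsHandler=ERROR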
Below are the CIM macros I am using, and there are different indexes mapped in the individual macros. I want to get the list of all indexes mapped in all the CIM macros, so I set up a scheduled search which runs and checks all the macros. But it is utilizing a lot of memory and searches are even failing. Please help me with a better way to get the list of all indexes mapped in the CIM macros.

cim_Authentication_indexes
cim_Alerts_indexes
cim_Change_indexes
cim_Endpoint_indexes
cim_Intrusion_Detection_indexes
cim_Malware_indexes
cim_Network_Resolution_indexes
cim_Network_Sessions_indexes
cim_Network_Traffic_indexes
cim_Vulnerabilities_indexes
cim_Web_indexes
And what can the problem be when the difference between the indexing time and _time is 4-5 minutes, and the alert runs every 15 minutes and looks at the last 15 minutes?
@ITWhisperer Can I please get your guidance?
Yes, you understand correctly. I have two different log types, ABC and EFG, in the same index, but the sourcetype is different for each. The condition is: when there is an error, it is counted from the ABC log, but the details it refers to are in the EFG log, which is in the other sourcetype, and I will also fetch the details of that log. What I want is: when the total error count in ABC is 5, then when I search ABC and EFG together it should show me only the 5 errors related to the correlationid. I hope this makes my query clear.
Thanks, I've tried that but still didn't get the "null" values. I do get an error which says "The specified span would result in too many (>175000) rows." I get this error a lot during this search, but I don't understand why only the null values would be missing. Additionally, does this error necessarily mean that the search has failed or stopped at the limit?
Hi @Ryan.Paredez, thanks for the heads up. I'll keep an eye on the thread. Cheers, Jerg
As per my investigation, in the minute that was alerted on, there are "svc_radius_probe_ctx" events. What type of changes should I make to resolve this? Should I increase the time? Or is there anything that needs to be done from the RSA console?
Are you saying that you are getting false alerts, i.e. when you look back at the minute that was alerted on, there are "svc_radius_probe_ctx" events? If so, this could be because they had not been indexed by the time the alert report was executed, i.e. you have not left enough time between the event happening, it being sent to Splunk, and it being indexed. There is (nearly) always a lag between the event time (_time) and the index time (_indextime), and your alert report schedule and time period should take this into account.
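A quick way to measure that lag (index and sourcetype are placeholders) is to compare _indextime with _time over a recent window and then size the alert's schedule and look-back accordingly:

index=my_index sourcetype=my_sourcetype earliest=-24h
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) avg(lag_seconds) perc95(lag_seconds) max(lag_seconds)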