All Posts


Hmm, I might still be doing something wrong, as I get the timechart but the results are all zeros, and there should be at least a couple above zero.
@dc18 - If you are on Splunk Cloud, try Data Manager - https://docs.splunk.com/Documentation/DM/1.8.3/User/AWSAbout - and see if it can help. If not, the Splunk Add-on for AWS would be your best bet. I hope this helps!!
@rob_gibson - You need to filter at the source which is generating the data, and not send that data to Splunk HEC at all. Alternatively, you can install a Splunk Heavy Forwarder (HF) locally on the server:
* Create a Splunk HEC input locally on the Splunk HF.
* Update your data source to send data to the local Splunk HF HEC instead of the Splunk indexers. You must use the services/collector/raw endpoint of Splunk HEC for data filtering to work. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
* Use nullQueue with a regex to filter data from going to Splunk. https://community.splunk.com/t5/Getting-Data-In/Filtering-events-using-NullQueue/m-p/66392
* Forward data from the Splunk HF to the Splunk indexers.
That said, not sending the data to Splunk HEC at all, filtered directly at the data source, would be the simpler solution. I hope this helps!!!
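As a concrete sketch of the nullQueue approach linked above (the sourcetype name and the event codes in the regex are placeholders, not from the original post - substitute your own), the heavy forwarder configuration might look like:

```
# props.conf on the heavy forwarder
[my:sourcetype]
TRANSFORMS-dropnoise = drop_noisy_eventcodes

# transforms.conf - route matching events to nullQueue so they are discarded
[drop_noisy_eventcodes]
REGEX = EventCode=(4662|5156)
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are dropped at parse time on the HF, so they never cross the WAN to the indexers.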
When you keep hitting the size limit, Splunk rolls buckets to frozen. That's the point of the limit. Some things worth verifying:
1) How did you increase the size limit? Which parameters did you edit, and did you restart splunkd?
2) How do you know the buckets are frozen due to the size limit?
3) Do you have volume size limits?
Sorry - learning a few things as I go here. Basically, I just need to compare the results of a search to a static, known list of values. The search returns a list of values using stats:

stats values(actualResults) as actualResults

I guess I'm not 100% clear on what to do first to create the static list using makeresults, and then how to append/use stats to combine them - I have attempted to do so without getting the results I expect. If I were to put it in SQL terms, I'd have a reference table of known values ("My Item 1", "My Item 2", etc.) and a results table of data to search, and I'd do a left outer join:

Ref table: MY_REF_TABLE
KNOWN_ITEM
My Item 1
My Item 2
My Item 3
My Item 4

Results table: MY_RESULTS_TABLE
RESULT_ITEM
My Item 1
My Item 3

Query:
select KNOWN_ITEM,
       case when RESULT_ITEM is null then 'No Match' else 'Match' end HASMATCH
from MY_REF_TABLE
left join MY_RESULTS_TABLE on KNOWN_ITEM = RESULT_ITEM

Results:
KNOWN_ITEM  HASMATCH
My Item 1   Match
My Item 2   No Match
My Item 3   Match
My Item 4   No Match
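One way to sketch this left-outer-join pattern in SPL (the base search is a placeholder, and the field name actualResults is taken from the stats line in the post):

```
index=... <your base search>
| stats values(actualResults) as RESULT_ITEM
| mvexpand RESULT_ITEM
| eval HASMATCH="Match"
| append
    [| makeresults
     | eval KNOWN_ITEM=split("My Item 1,My Item 2,My Item 3,My Item 4", ",")
     | mvexpand KNOWN_ITEM
     | table KNOWN_ITEM]
| eval KNOWN_ITEM=coalesce(KNOWN_ITEM, RESULT_ITEM)
| stats values(HASMATCH) as HASMATCH by KNOWN_ITEM
| fillnull value="No Match" HASMATCH
```

Unlike a strict left join, any result values that are not in the known list will also show up as rows; add a where clause to drop them if that matters for your use case.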
The field name ("attribute") for the index is simply "index".
@marnall Is this something that I can take to Splunk Support? Do they provide support for these types of queries?
The query returns no results because the timechart command requires the _time field, but that field was removed by the stats command on line 2. The fix is to include _time in the stats command, like this:

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| bin span=1d _time
| stats max(Value) as Value by host, _time
| eventstats ...
| timechart span=1d avg(value_percentage_difference)

Adjust the span option in the bin and timechart commands to your preference, and make sure they match.
Hi @Roberto.Barnes, I did some digging around and found someone who discovered a workaround. It temporarily solved the problem by adding the following Java option:

--add-reads jdk.jfr=ALL-UNNAMED

If the community does not jump in and provide any insight, you can also contact Cisco AppDynamics Support: How do I submit a Support ticket? An FAQ
@Markfill - Please describe what you mean by index rolling (I assume you mean bucket rolling, not index rolling):
* Warm bucket to cold bucket? OR
* Cold bucket to frozen bucket, or being deleted?
I am using the below query (server names replaced) to find when there is a greater than 50% difference in volume between 2 call routers (servers). For some reason I'm getting no timechart results, even when setting the difference to 1%, which should always return results.

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| stats max(Value) as Value by host
| eventstats max(if(host='SERVER1', Value, null)) as server1_value max(if(host='SERVER2', Value, null)) as server2_value
| eval value_difference = abs(server1_value - server2_value)
| eval value_percentage_difference = if(coalesce(server1_value, server2_value) != 0, (value_difference / coalesce(server1_value, server2_value) * 100), 0)
| where value_percentage_difference > 1
| timechart avg(value_percentage_difference)
Do we have a Splunk attribute to fetch the index? We are passing the index in the Splunk query. With only the log file, do we have any Splunk attribute to fetch the index?

index=aaa
index=bbb

Like we have for host:

index=aaa source="/var/log/tes1.log" | stats count by host
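To illustrate that index is itself an ordinary searchable field (the index and source names below are taken from the post), you can group by it just like host:

```
index=aaa OR index=bbb source="/var/log/tes1.log"
| stats count by index, host
```

Each result row then shows which index the events came from alongside the host.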
In QA and Prod I have 3 servers each:

test - testhost
qa - qahost1, qahost2, qahost3
prod - prodhost1, prodhost2, prodhost3

and my query would be, for QA, if I choose QA from the dropdown list:

index=aaa source="/var/log/tes1.log" ((host=qahost1) OR (host=qahost2) OR (host=qahost3))

Can you please help me integrate the above with the query below?

index=aaa source="/var/log/test1.log"
| stats count by host
| eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")
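A sketch of the combined search (note the regexes are anchored with ^, since an unanchored match(host, "t") would also match the "t" in qahost1 and prodhost1):

```
index=aaa source="/var/log/test1.log" (host=qahost1 OR host=qahost2 OR host=qahost3)
| stats count by host
| eval category=case(match(host, "^t"), "Test", match(host, "^q"), "QA", match(host, "^p"), "Prod", true(), "Unknown")
```

If the environment comes from a dashboard dropdown, a token (for example $env_hosts$ - the token name is an assumption) can replace the hard-coded host clause.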
These were dummy numbers, apologies. Percent change of the average. This is what I am looking for:

Date                    S0100D       S0400D
Friday    2024-04-11    200 (50%)    250 (25%)
Saturday  2024-04-11    600 (50%)    1750 (75%)
AVG                     400          1000
Still facing the same issue. It is intermittent: sometimes it works and sometimes it does not. I should get Total_GB1 and the total of all columns, but instead I am getting 18 in place of Total_GB1, and the penultimate value is being printed under 18.
Right now! What is the best visualization to plot such multiple data sources? It should illustrate the response codes from each back-end service as time changes.
Hi, you said "due to a recent release". What is this? A new Splunk version, a new software release of your business app, or something else? r. Ismo
The index keeps rolling data off due to size, even after the size limit has been increased. Is there another way to resolve this issue?
Hi, something like:

TIME_PREFIX=^\d+:\d+:\d+:\d+:
TIME_FORMAT=%Y/%m/%d %H:%M:%S.%2Q

r. Ismo
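In context, these settings would live in a props.conf stanza for the sourcetype on the parsing tier (the stanza name and the lookahead value below are assumptions, not from the post):

```
# props.conf
[your:sourcetype]
TIME_PREFIX = ^\d+:\d+:\d+:\d+:
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%2Q
# limit how far past TIME_PREFIX Splunk scans for the timestamp (value assumed)
MAX_TIMESTAMP_LOOKAHEAD = 30
```

TIME_PREFIX skips past the leading colon-separated numbers, and %2Q is Splunk's format code for two subsecond digits.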
I have a cloud-based server sending events to the indexer over my WAN link via HTTP Event Collector (HEC). We have limited bandwidth on the WAN link. I want to limit (blacklist) a number of event codes and reduce the transfer of log data over the WAN.
Q: Does a blacklist in inputs.conf for the HEC filter the events at the indexer, or does it stop those events from being transferred at the source?
Q: If I install a Universal Forwarder, am I able to stop the blacklisted events from being sent across the WAN?