All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Unfortunately not, as this app is "Not Supported" (as seen on its Splunkbase page), so Splunk Support can't help you with fixing the app. If you are using Splunk Cloud and would like assistance with managing apps on Splunk Cloud, then Splunk Support can probably help with getting the app onto your cloud instance.
Here's my configuration:

[mack]
repFactor = auto
coldPath = volume:cold/customer/mack/colddb
homePath = volume:hot_warm/customer/mack/db
thawedPath = /splunk/data/cold/customer/mack/thaweddb
frozenTimePeriodInSecs = 34186680
maxHotBuckets = 10
maxTotalDataSizeMB = 400000

So instead of data rolling to cold, it rolls off.
What is the error in the below query, which I am using to populate a dropdown list?

index=aaa (source="/var/log/testd.log")
| stats count by host
| eval env=case(match(host, "*10qe*"), "Test", match(host, "*10qe*"), "QA", match(host, "*10qe*"), "Prod")
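A note on the query above: match() in SPL takes a regular expression, not a shell-style wildcard, so "*10qe*" is an invalid pattern (a leading * has nothing to repeat). Also, all three branches test the identical pattern, so every matching host would fall into the first branch. A hedged sketch of one way this is often written - the substrings "qe", "qa", and "prod" are assumptions to be replaced with your real host-naming conventions:

```
index=aaa source="/var/log/testd.log"
| stats count by host
| eval env=case(match(host, "qe"), "Test",
                match(host, "qa"), "QA",
                match(host, "prod"), "Prod",
                true(), "Unknown")
```

The true() branch acts as a catch-all so hosts that match nothing still get a value.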
The configuration elements work where they are defined (but they may have additional impact on other functionality due to mutual dependencies - for example, lowering output bandwidth on a forwarder can affect the rate of input on some inputs; you can't slow down inputs working in "push" mode - you can only drop events if the queue is full). So if you were to configure your HEC input to blacklist something, that would work on the HEC input, not on other components.

Having said that - what do you mean by blacklisting on HEC input? I don't recall any setting regarding HTTP input filtering/blacklisting events. The closest thing to any filtering on HEC input would be the list of SANs allowed to connect, and that's it.

Even if you wanted to filter on the source forwarder, remember that filtering applies only to specific types of inputs - Windows event log inputs can filter and ingest only some events, and file monitor inputs can filter and ingest only certain files (still no event-level filtering). Maybe you could implement some form of filtering on the UF if you enabled additional processing on the UF itself, but that's not very well documented (hardly documented at all, to be honest) and turning on this option is not recommended.

So if you wanted to filter events before sending them downstream, you'd most probably need a HF which would do the parsing locally, filter some events out, and then send across your WAN link. But here we have two issues:

1) While it is called "http output", the forwarder doesn't use "normal" HEC to send events downstream, but uses S2S tunnelled over an HTTP connection. It's a completely different protocol.

2) A HF parses data locally and sends the data parsed, not just cooked. That unfortunately means it sends a whole lot more data than a UF normally does, since the UF sends data merely cooked.

So "limiting" your bandwidth usage by installing a HF and filtering the data before sending might actually have the opposite effect: even though you might be sending fewer events (because some have been filtered out), you might actually be sending more data altogether (because you're sending parsed data instead of just a cooked stream). Depending on the data you want to ingest, you might consider other options on the source side - if the events come from syslog sources, you could set up a syslog receiver filtering data before passing it to Splunk; if you have files, you could preprocess them with an external script. And so on.
I tried the below, but it didn't return anything:

(source="/var/ltest/test.log") | table index
"You must use services/collector/raw endpoint of Splunk HEC for data filtering to work." This is not entirely true. In fact it's not true at all But seriously, while the /event endpoint does ski... See more...
"You must use services/collector/raw endpoint of Splunk HEC for data filtering to work." This is not entirely true. In fact it's not true at all But seriously, while the /event endpoint does skip some parts of the ingestion queue and you can't affect line breaking or timestamp recognition (with exceptions) this way, your normal routing and filtering by means of transforms modifying _TCP_ROUTE or queue works perfectly ok.  
Hi everybody. I have three Splunk instances in three Docker containers on the same subnet. I have mapped port 8089 to port 8099 on each container. No firewalls between them. I checked the route from/to all containers (via port 8099) and there are no blocks and no issues. But when I try to add one of the containers' Splunk instances as a search peer in a distributed search deployment, I always receive the error "Error while sending public key to search peer". Any suggestion about this? Thanks to everybody in advance.
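One thing worth double-checking in this setup: if the containers talk to each other over the Docker network directly, they typically reach each other on the internal port 8089, not the host-mapped 8099 (the mapping only applies when connecting via the Docker host). For reference, the peer can also be added from the CLI on the search head - hostnames and credentials below are placeholders:

```
# Run inside the search head container; "container2" and the credentials are placeholders
splunk add search-server https://container2:8089 \
    -auth admin:changeme \
    -remoteUsername admin -remotePassword changeme
```

The -remoteUsername/-remotePassword pair is the admin account on the peer itself, which is what the public-key exchange in the error message relies on.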
Hmm, I might still be doing something wrong, as I get the timechart but the results are all zeros, and there should be at least a couple above zero.
@dc18 - If you are on Splunk Cloud, try Data Manager - https://docs.splunk.com/Documentation/DM/1.8.3/User/AWSAbout - and see if it can help. If not, the Splunk Add-on for AWS would be your best bet. I hope this helps!!
@rob_gibson - You need to filter on the source which is generating the data, and not send that data to Splunk HEC at all.

Alternatively, you can install a Splunk HF locally on the service:
* Create a Splunk HEC input locally on the Splunk HF.
* Update your data source to send data to the local Splunk HF HEC instead of the Splunk indexers. You must use services/collector/raw endpoint of Splunk HEC for data filtering to work. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
* Use nullQueue with a regex to filter data from going to Splunk. https://community.splunk.com/t5/Getting-Data-In/Filtering-events-using-NullQueue/m-p/66392
* Forward data from the Splunk HF to the Splunk indexers.

I would recommend not sending the data to Splunk HEC directly from the data source at all - that would be the simpler solution. I hope this helps!!!
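A minimal sketch of the nullQueue step on the HF - the sourcetype name and the regex are assumptions to be adapted to the actual data:

```
# props.conf on the HF (sourcetype name is hypothetical)
[my:source:type]
TRANSFORMS-drop = drop_heartbeats

# transforms.conf on the HF
[drop_heartbeats]
# Events matching this regex are discarded before indexing
REGEX = heartbeat|keep-?alive
DEST_KEY = queue
FORMAT = nullQueue
```

Events that match the regex never leave the HF's ingestion pipeline, so they are neither forwarded nor counted against license.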
When you keep hitting the size limit, Splunk will roll the buckets to frozen. That's the point. Some things worth verifying:

1) How did you increase the size limit? Which parameters did you edit, and did you restart splunkd?
2) How do you know the buckets are frozen due to the size limit?
3) Do you have volume size limits?
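For context, the two retention settings interact like this: whichever limit is hit first - time-based or size-based - causes the oldest bucket to freeze. A sketch using the values from the configuration posted earlier in this thread:

```
# indexes.conf
[mack]
# ~395 days of time-based retention
frozenTimePeriodInSecs = 34186680
# ...but the index also freezes its oldest buckets as soon as total size
# exceeds ~400 GB, even if those buckets are younger than the time limit
maxTotalDataSizeMB = 400000
```

If data is "rolling off" early, the size cap (or a volume-level maxVolumeDataSizeMB) is the usual culprit rather than the time setting.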
Sorry - learning a few things as I go here. Basically, I just need to compare the results of a search to a static known list of values. The search will return a list of values using stats:

stats values(actualResults) as actualResults

I guess I'm not 100% clear on what to do first to create the static list using makeresults, and then to append/use stats to combine - I have attempted to do so without getting the results I expect. If I were to put it in SQL terms, I'd have a reference table of known values ("My Item 1", "My Item 2", etc.) and a results table of data to search, and I'd do a left outer join:

Ref table MY_REF_TABLE:

KNOWN_ITEM
My Item 1
My Item 2
My Item 3
My Item 4

Results table MY_RESULTS_TABLE:

RESULT_ITEM
My Item 1
My Item 3

Query:

select KNOWN_ITEM,
       case when RESULT_ITEM is null then 'No Match' else 'Match' end as HASMATCH
from MY_REF_TABLE
left join MY_RESULTS_TABLE on KNOWN_ITEM = RESULT_ITEM

Results:

KNOWN_ITEM   HASMATCH
My Item 1    Match
My Item 2    No Match
My Item 3    Match
My Item 4    No Match
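One hedged way to express that left join in SPL, assuming the base search produces the actualResults field (the item names are taken from the example; the index/sourcetype placeholders would be your real search):

```
index=... sourcetype=...
| stats values(actualResults) as KNOWN_ITEM
| mvexpand KNOWN_ITEM
| eval HASMATCH="Match"
| append
    [| makeresults
     | eval KNOWN_ITEM=split("My Item 1,My Item 2,My Item 3,My Item 4", ",")
     | mvexpand KNOWN_ITEM]
| stats values(HASMATCH) as HASMATCH by KNOWN_ITEM
| eval HASMATCH=coalesce(HASMATCH, "No Match")
```

The appended makeresults subsearch plays the role of the reference table: rows from it carry no HASMATCH value, so after the stats-by-KNOWN_ITEM rollup, items seen only in the reference list fall through coalesce() to "No Match", mirroring the SQL left outer join.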
The field name ("attribute") for index is "index".
@marnall Is this something that I can reach out to Splunk Support for? Do they provide support for these types of queries?
The query returns no results because the timechart command requires the _time field, but that field was removed by the stats command on line 2. The fix is to include _time in the stats command, like this:

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| bin span=1d _time
| stats max(Value) as Value by host, _time
| eventstats ...
| timechart span=1d avg(value_percentage_difference)

Adjust the span option in the bin and timechart commands to preference. Make sure they match.
Hi @Roberto.Barnes, I did some digging around and found someone who discovered a workaround. The workaround that has temporarily solved this is adding the following Java option:

--add-reads jdk.jfr=ALL-UNNAMED

If the community does not jump in and provide any insight, you can also contact Cisco AppDynamics Support: How do I submit a Support ticket? An FAQ
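If the JVM is launched from a startup script, one common place to add such an option is the existing Java options variable - the JAVA_OPTS name and jar path below are placeholders, not anything AppDynamics-specific:

```
# Append the workaround to whatever JVM options are already set (names are placeholders)
JAVA_OPTS="$JAVA_OPTS --add-reads jdk.jfr=ALL-UNNAMED"
java $JAVA_OPTS -jar app.jar
```

However the option is passed, it must reach the same JVM that loads the agent for the workaround to apply.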
@Markfill - Please describe what you mean by index rolling (I assume you mean bucket rolling, not index rolling):
* Warm bucket to cold bucket? OR
* Cold bucket to frozen bucket or being deleted?
I am using the below query (server names replaced) to find when there is a greater than 50% difference in volume between 2 call routers (servers). For some reason I'm getting no timechart results, even when setting the difference threshold to 1%, which should always return results.

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| stats max(Value) as Value by host
| eventstats max(if(host='SERVER1', Value, null)) as server1_value max(if(host='SERVER2', Value, null)) as server2_value
| eval value_difference = abs(server1_value - server2_value)
| eval value_percentage_difference = if(coalesce(server1_value, server2_value) != 0, (value_difference / coalesce(server1_value, server2_value) * 100), 0)
| where value_percentage_difference > 1
| timechart avg(value_percentage_difference)
Do we have a Splunk attribute to fetch the index? We are passing the index in the Splunk query:

index = aaa
index = bbb

With only the log file, do we have any Splunk attribute to fetch the index, like we have for host?

index=aaa (source="/var/log/tes1.log") | stats count by host
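If the question is whether index is available as a field like host: yes, every event carries an index field, so you can group by it directly. A sketch using the index names from the post:

```
(index=aaa OR index=bbb) source="/var/log/tes1.log"
| stats count by index, host
```

Note that you still have to name the indexes (or use index=*, subject to your role's allowed indexes) in the base search; the index field only tells you which index each returned event came from.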
In QA and Prod I have 3 servers each:

test - testhost
qa - qahost1, qahost2, qahost3
prod - prodhost1, prodhost2, prodhost3

And my query for QA, if I choose QA from the dropdown list, would be:

index=aaa (source="/var/log/tes1.log") ((host=qahost1) OR (host=qahost2) OR (host=qahost3))

Can you please help me integrate the above with the below query?

index=aaa source="/var/log/test1.log"
| stats count by host
| eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")
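A hedged sketch combining the two, with host names taken from the post. Note that match(host, "t") matches any host containing the letter "t" anywhere, and "qahost1"/"prodhost1" both contain a "t", so the original case() would label everything "Test"; anchoring on the host-name prefix with like() avoids that:

```
index=aaa source="/var/log/test1.log"
    (host=testhost OR host=qahost1 OR host=qahost2 OR host=qahost3
     OR host=prodhost1 OR host=prodhost2 OR host=prodhost3)
| stats count by host
| eval category=case(like(host, "test%"), "Test",
                     like(host, "qahost%"), "QA",
                     like(host, "prodhost%"), "Prod",
                     true(), "Unknown")
```

In the dashboard, the dropdown token would then substitute the appropriate host clause (or the category value) into this base search.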