Additional Resources

Free Trial: Splunk Observability
Documentation: Track service performance using dashboards in Splunk APM
Certification: Splunk Observability Cloud Certified Metrics User
Blogs: APM: Not an Infrastructure Monitoring Strategy | APM Metrics: The Ultimate Guide
Training Courses: Using Splunk Application Performance Monitoring
1. OK. This is the _current_ configuration. It would be even better to see the output of

    splunk btool indexes list mack

and

    | rest /services/data/indexes/mack

But the question remains: what did you change, and how?

2. Did you check the reason for the bucket rolling?

    index=_internal component=BucketMover idx=mack
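For example, a minimal sketch of how you might eyeball the reason. BucketMover log lines typically state why a bucket is being frozen (retention time vs. size limit), but the exact wording is version dependent, so treat the "freeze" string filter as an assumption:

    index=_internal sourcetype=splunkd component=BucketMover idx=mack "freeze"
    | table _time _raw

If the _raw messages mention maxTotalDataSizeMB, it's the size cap; if they mention frozenTimePeriodInSecs, it's retention - but verify against your version.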
HEC on its own doesn't have filtering abilities. You can filter events after receiving them (on any input) using props and transforms (sketched below), but that doesn't change what you're sending over your WAN link. Your question is fairly generic and we don't have a lot of details about your environment, so the answer is also only really generic in nature. Anyway, ingesting events into Splunk using Logstash might prove to be complicated unless you properly prepare your data in Logstash to conform to the format normally ingested by standard Splunk methods (otherwise you'd need to put in some work to properly extract fields from such Logstash-formatted events). But Logstash should give you the ability to filter the data before sending it over HTTP to the HEC input.
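For reference, a minimal sketch of that post-receipt filtering on the receiving Splunk side. The sourcetype name and the regex are assumptions - adjust them to your data - and remember this drops events only after they have already crossed the WAN link:

    # props.conf
    [my_logstash_sourcetype]
    TRANSFORMS-drop_noise = drop_noise_events

    # transforms.conf
    [drop_noise_events]
    REGEX = DEBUG|heartbeat
    DEST_KEY = queue
    FORMAT = nullQueue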
For some reason it is not on Splunkbase. I could only find the .SPL files in a git repository at https://github.com/SplunkBAUG/CCA for TA_genesys_cloud-1.0.*.spl

EDIT: As PickleRick suggested, when you get third-party hosted applications like the one linked, you have none of the protections that would be offered by AppInspect. It is highly recommended to check the contents for malicious code before installing it on your machine.
Do you get any events when you use this search? (You can also set the time range to be very large, in case the events from the log source are not in the past 24 hours. Also double-check that the source path is correct.)

    index=* source="/var/ltest/test.log"
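If you want to rule the time range out entirely, you can pin it inside the search itself - a minimal sketch, where earliest=0 means "from the beginning of time":

    index=* source="/var/ltest/test.log" earliest=0 latest=now
    | stats count by index sourcetype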
Some potential problems with your query are:

1. index=aaa(source="/var/log/testd.log") does not have a space between the index and source filters.

2. The match() functions in your eval env=case() part should have valid regexes as the second argument, as in match(<field>,<regex>). The glob-style *10qe* is not a valid regex; the regex equivalent would be .*10qe.*, and since match() looks for a partial match anyway, plain 10qe is enough.

Try this:

    | eval env=case(match(host, "10qe"), "Test", match(host, "10qe"), "QA", match(host, "10qe"), "Prod")

(Note that, as in your original, all three branches test the same pattern, so only the first branch, "Test", can ever match; each environment presumably needs its own distinguishing pattern - see the sketch below.)

ref: https://docs.splunk.com/Documentation/SCS/current/SearchReference/ConditionalFunctions
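A sketch of what the full query might look like once each environment has its own pattern. The substrings 10qe, 10qa and 10pr are purely hypothetical placeholders for whatever actually distinguishes your hostnames:

    index=aaa source="/var/log/testd.log"
    | stats count by host
    | eval env=case(
        match(host, "10qe"), "Test",
        match(host, "10qa"), "QA",
        match(host, "10pr"), "Prod",
        true(), "Unknown")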
What I can say is I have nowhere near your understanding of Splunk operations. I do appreciate your input. I am taking my limited understanding of our wholly-UF-to-indexer environment and applying what I know to solve the issue of reducing cloud-to-on-prem traffic over the WAN link from our new SaaS solution. I keep a very low daily transfer rate (and licensing rate) in our on-prem environment by blacklisting noise and whitelisting the key events we want to track. I have no rights on the source machines, and I cannot install a UF, or anything for that matter. Logstash is the only option provided, which I assume requires HEC to receive the logs. I have read that HEC supports white/black listing, which is where my question came from.
Unfortunately not, as this app is "Not Supported" (as seen on the Splunkbase page), so Splunk support can't help you with fixing the app. If you are using Splunk Cloud and would like assistance with managing apps on Splunk Cloud, then Splunk support can probably help with getting the app onto your cloud instance.
Here's my configuration:

    [mack]
    repFactor = auto
    coldPath = volume:cold/customer/mack/colddb
    homePath = volume:hot_warm/customer/mack/db
    thawedPath = /splunk/data/cold/customer/mack/thaweddb
    frozenTimePeriodInSecs = 34186680
    maxHotBuckets = 10
    maxTotalDataSizeMB = 400000

So instead of data rolling to cold, it rolls off.
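One way to sanity-check whether the index is actually bumping into maxTotalDataSizeMB is to compare it against the current on-disk size per bucket state - a minimal sketch using dbinspect:

    | dbinspect index=mack
    | stats sum(sizeOnDiskMB) as totalMB count as buckets by state

If the hot, warm and cold totals together are already near 400000, the size cap, not retention, is what's freezing buckets.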
What is the error in the below query, which I am using to populate a drop-down list?

    index=aaa(source="/var/log/testd.log") |stats count by host | eval env=case(match(host, "*10qe*"), "Test", match(host, "*10qe*"), "QA", match(host, "*10qe*"), "Prod" )
The configuration elements work where they are defined, though they may have additional impact on other functionalities due to mutual dependency. For example, lowering output bandwidth on a forwarder (see the limits.conf sketch below) can affect the rate of input on some inputs - you can't slow down inputs working in "push" mode; you can only drop events if the queue is full. So if you were to configure your HEC input to blacklist something, that would work on the HEC input, not on other components.

Having said that - what do you mean by blacklisting on the HEC input? I don't recall any setting regarding http input filtering/blacklisting of events. The closest thing to any filtering on the HEC input would be the list of SANs allowed to connect, and that's it.

Even if you wanted to filter on the source forwarder, remember that filtering applies only to specific types of inputs: Windows event log inputs can filter and ingest only some events, and file monitor inputs can filter and ingest only certain files (still no event-level filtering). Maybe you could implement some form of filtering on the UF if you enabled additional processing on the UF itself, but that's not very well documented (hardly documented at all, if I were to be honest) and turning on this option is not recommended.

So if you wanted to filter events before sending them downstream, you'd most probably need a HF which would do the parsing locally, filter some of them out and then send the rest across your WAN link. But here we have two issues:

1) While it is called "http output", the forwarder doesn't use "normal" HEC to send events downstream but uses S2S tunnelled over an http connection. It's a completely different protocol.

2) A HF parses data locally and sends the data parsed, not just cooked. That unfortunately means it sends a whole lot more data than a UF normally does (the UF sends data merely cooked). So "limiting" your bandwidth usage by installing a HF and filtering the data before sending might actually have the opposite effect: even though you might be sending fewer events (because some have been filtered out), you might be sending more data altogether (because you're sending parsed data instead of just a cooked stream).

Depending on the data you want to ingest, you might consider other options on the source side: if the events come from syslog sources, you could set up a syslog receiver filtering data before passing it to Splunk; if you have files, you could preprocess them with an external script. And so on.
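As an aside, the bandwidth cap mentioned at the top of this post lives in limits.conf on the forwarder. A minimal sketch - 256 is an arbitrary example value, in KB/s:

    # limits.conf on the forwarder
    [thruput]
    maxKBps = 256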
I tried the below, but it didn't return anything:

    (source="/var/ltest/test.log") | table index
"You must use services/collector/raw endpoint of Splunk HEC for data filtering to work." This is not entirely true. In fact it's not true at all But seriously, while the /event endpoint does ski... See more...
"You must use services/collector/raw endpoint of Splunk HEC for data filtering to work." This is not entirely true. In fact it's not true at all But seriously, while the /event endpoint does skip some parts of the ingestion queue and you can't affect line breaking or timestamp recognition (with exceptions) this way, your normal routing and filtering by means of transforms modifying _TCP_ROUTE or queue works perfectly ok.  
Hi everybody. I have three Splunk instances in three Docker containers on the same subnet. I have mapped port 8089 to port 8099 on each container. No firewalls between them. I checked the route from/to all containers (via port 8099) and there are no blocks and no issues. But when I try to add one of the containers as a search peer in a distributed search deployment, I always receive the error "Error while sending public key to search peer". Any suggestions about this? Thanks to everybody in advance.
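For reference, this is the CLI form of the same operation, which sometimes gives a more useful error than the UI. Hostnames and credentials here are placeholders; note that the management port must be the one actually reachable from the search head, i.e. the mapped 8099 in this setup:

    splunk add search-server https://container2:8099 -auth admin:changeme -remoteUsername admin -remotePassword changeme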
Hmm, I might still be doing something wrong, as I get the timechart but the results are all zeros, and there should be at least a couple above zero.
@dc18 - If you are on Splunk Cloud, try Data Manager - https://docs.splunk.com/Documentation/DM/1.8.3/User/AWSAbout - and see if it can help. If not, the Splunk Add-on for AWS would be your best bet. I hope this helps!!
@rob_gibson - You need to filter on the source which is generating the data, and not send that data to Splunk HEC at all.

Alternatively:

1. Install a Splunk HF locally on the service.
2. Create a Splunk HEC input locally on the Splunk HF (a sketch of this follows below).
3. Update your data source to send data to the local Splunk HF HEC instead of the Splunk indexers. You must use the services/collector/raw endpoint of Splunk HEC for data filtering to work. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
4. Use nullQueue with a regex to filter data from going to Splunk. https://community.splunk.com/t5/Getting-Data-In/Filtering-events-using-NullQueue/m-p/66392
5. Forward the data from the Splunk HF to the Splunk indexers.

I would recommend not sending data to Splunk HEC directly from the data source at all - that would be the simpler solution. I hope this helps!!!
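A minimal sketch of step 2, enabling a HEC input on the HF in inputs.conf. The token value, sourcetype and index are placeholders you'd generate and choose yourself:

    # inputs.conf on the HF
    [http]
    disabled = 0
    port = 8088

    [http://my_filtered_input]
    token = 00000000-0000-0000-0000-000000000000
    sourcetype = my_source
    index = main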
When you keep hitting the size limit, Splunk will roll the buckets to frozen. That's the point. Some things worth verifying:

1) How did you increase the size limit? Which parameters did you edit, and did you restart your splunkd?
2) How do you know the buckets are frozen due to the size limit?
3) Do you have volume size limits?
Sorry - learning a few things as I go here. Basically, I just need to compare the results of a search to a static known list of values. The search will return a list of values using stats:

    stats values(actualResults) as actualResults

I guess I'm not 100% clear on what to do first to create the static list using makeresults, and then to append/use stats to combine - I have attempted to do so without getting the results I expect.

If I were to put it in SQL terms, I'd have a reference table of known values ("My Item 1", "My Item 2", etc.) and a results table of data to search, and I'd do a left outer join:

Ref table MY_REF_TABLE:

    KNOWN_ITEM
    My Item 1
    My Item 2
    My Item 3
    My Item 4

Results table MY_RESULTS_TABLE:

    RESULT_ITEM
    My Item 1
    My Item 3

Query:

    select KNOWN_ITEM,
           case when RESULT_ITEM is null then 'No Match' else 'Match' end as HASMATCH
    from MY_REF_TABLE
    left join MY_RESULTS_TABLE on KNOWN_ITEM = RESULT_ITEM

Results:

    KNOWN_ITEM    HASMATCH
    My Item 1     Match
    My Item 2     No Match
    My Item 3     Match
    My Item 4     No Match
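For what it's worth, here's a sketch of one SPL equivalent of that left outer join, using makeresults + append + stats. The index and sourcetype in the subsearch are placeholders for your actual search:

    | makeresults
    | eval KNOWN_ITEM=split("My Item 1,My Item 2,My Item 3,My Item 4", ",")
    | mvexpand KNOWN_ITEM
    | eval in_ref="yes"
    | append
        [ search index=main sourcetype=my_data
          | stats values(actualResults) as KNOWN_ITEM
          | mvexpand KNOWN_ITEM
          | eval in_results="yes" ]
    | stats values(in_ref) as in_ref values(in_results) as in_results by KNOWN_ITEM
    | where in_ref=="yes"
    | eval HASMATCH=if(in_results=="yes", "Match", "No Match")
    | table KNOWN_ITEM HASMATCH

The stats by KNOWN_ITEM merges the two "tables"; the where clause keeps only known items (the left side of the join), and the if() marks the items that also appeared in the search results.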
The field name ("attribute") for index is "index".