All Posts


Hi @smallwonder, when you say "limit the amount of data", do you mean limiting the files to read, or filtering events? If limiting the files to read, you can add whitelist and blacklist options to your inputs.conf. If instead you want to filter some data, you have to identify one or more regexes to match your logs (positive or negative filtering), and then apply the method described at https://docs.splunk.com/Documentation/Splunk/9.3.2/Forwarding/Routeandfilterdatad. Remember that these filters must be applied on the first full Splunk instance the data passes through, in other words on the first Heavy Forwarder present or on the Indexers, not on Universal Forwarders. Ciao. Giuseppe
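A minimal sketch of both approaches described above; the monitor path, sourcetype, stanza names, and regexes are all placeholders you would adapt to your own data:

```
# inputs.conf on the Universal Forwarder -- only read files matching the whitelist
[monitor:///var/log/myapp]
whitelist = \.log$
blacklist = debug|trace

# props.conf on the first Heavy Forwarder or on the Indexers --
# route events of this sourcetype through a filtering transform
[my_sourcetype]
TRANSFORMS-filter = drop_noise

# transforms.conf -- send events matching the regex to nullQueue
# (negative filtering: matched events are discarded before indexing)
[drop_noise]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```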
Do you mean the control panel?
How do I limit the amount of data coming over from [monitor://path/to/file] in my Splunk forwarder inputs.conf file? I did see the whitelist and blacklist settings. Are there any other ways to stop log files, for example from WinFIM, from exceeding my data limits?
Reading through the Ideas, there are a few written in different ways that would yield the same result. This is the simplest explanation: https://ideas.splunk.com/ideas/PLECID-I-606. If we can use * as a literal, that would help with your problem too. What would be best is to be able to use a regex statement. At my shop, it would be OK to allow index=ABCDE*, but not index=A*.
Share the panel that is referencing the TimeRange with the error.
You need to see if the source exposes any sort of API to provide the information you need. Alternatively, you can script a curl statement to see if the initial HTTP response code is in the 400s, but that's far more prone to complications once you take into account the numerous things the domain owners can do with their service.
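A sketch of the curl approach mentioned above. The URL is the one from the question; the function names and the status buckets are illustrative, and a real monitor would need retries and timeouts on top of this:

```shell
#!/bin/sh
# check_url prints only the numeric HTTP status code for a URL (no body).
# classify_status buckets a status code into a coarse health label.

check_url() {
  # -s silences progress, -o /dev/null discards the body,
  # -w '%{http_code}' prints just the status code
  curl -s -o /dev/null -w '%{http_code}' "$1"
}

classify_status() {
  case "$1" in
    2??) echo "ok" ;;
    4??) echo "client-error" ;;   # the 400s the post refers to
    5??) echo "server-error" ;;
    *)   echo "unknown" ;;        # timeouts, DNS failures, etc.
  esac
}

# Example usage (requires network access):
# status=$(check_url "https://www.ecb.europa.eu/")
# classify_status "$status"
```

The script's output could be written to a file monitored by a forwarder, or sent to Splunk via a scripted input.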
Hello @dural_yyz, this is the source code for the control and token:

{
    "options": {
        "defaultValue": "-24h@h,now",
        "token": "TimeRange"
    },
    "title": "Time Selection",
    "type": "input.timerange"
}

See the picture for the panel.
Please share the source code for the Time Selection dropdown and for the search panel where you are referencing the token.
I want to schedule data learning for a source and alert more accurately when the data volume gets close to zero and that behavior is not normal. I am currently using Forecast Time Series with a training window of 150 days back, but it generates false alerts. Any suggestions to adapt my model?
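One common alternative to a long forecast window is a simpler baseline comparison in SPL, alerting only when the observed count falls below the model's lower confidence bound. A sketch, where the index, source, spans, and the floor value of 10 are all placeholders:

```
| tstats count where index=my_index source=my_source by _time span=1h
| predict count as predicted future_timespan=24 lower95=lower upper95=upper
| where count < lower AND count < 10
```

Tightening the confidence interval, or requiring several consecutive low buckets before alerting, are typical ways to cut down false positives.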
@PickleRick, sorry for the late answer. You are right, I think we misunderstood how some attributes work in indexes.conf, and thus the configuration was not strong enough to force the rolling of the warm buckets. We will surely rework the conf and see what happens, but I think that was the main issue. Thanks a lot for your time and answers!
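For reference, the indexes.conf attributes that typically control bucket rolling look like this; the values below are illustrative, not recommendations:

```
# indexes.conf -- attributes that influence hot/warm/cold bucket rolling
[my_index]
maxDataSize = auto_high_volume   # roll a hot bucket to warm when it reaches this size
maxHotSpanSecs = 86400           # roll a hot bucket after it spans this much event time
maxWarmDBCount = 300             # beyond this many warm buckets, the oldest roll to cold
frozenTimePeriodInSecs = 7776000 # buckets older than this are frozen (deleted or archived)
```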
Hello @ITWhisperer, I tried this and got the same error: Invalid value "$TimeRange.earliest$" for time term 'earliest'
Try something like this: earliest=$TimeRange.earliest$ latest=$TimeRange.latest$
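In Dashboard Studio (which the input.timerange JSON above suggests is in use), the token is usually applied in the search's queryParameters rather than inline in the SPL string. A sketch, with an illustrative query:

```
{
    "type": "ds.search",
    "options": {
        "query": "index=_internal | timechart count",
        "queryParameters": {
            "earliest": "$TimeRange.earliest$",
            "latest": "$TimeRange.latest$"
        }
    }
}
```

Putting $TimeRange.earliest$ directly inside the query string is where the "Invalid value" error often comes from.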
See the Getting Data In manual.
Hi @PolarBear01, the only way to have HA at the forwarder level is to have two or more receivers (rsyslog, syslog-ng or SC4S), so your receiving tier keeps working even if one Splunk instance is down, with a Load Balancer in front that distributes syslog between them and manages failover. Receivers can be located on UFs or on HFs; I usually use rsyslog on UFs! I don't know what you mean by manual balancing; for real HA, you need a Load Balancer that works without any manual action. There's also the possibility of configuring DNS for load balancing and failover management, but DNS usually responds with a delay when one receiver fails, so you lose the first logs; for this reason a real Load Balancer (e.g. F5) is the best solution for real HA. The HFs are useful if you want to concentrate all logs before sending them to Splunk Cloud; otherwise (on premise) they aren't mandatory. Ciao. Giuseppe
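A minimal rsyslog receiver on a UF host, as suggested above, might look like this; the ports, file paths, and template name are illustrative:

```
# /etc/rsyslog.d/splunk.conf -- receive syslog on UDP/TCP 514 and write to disk,
# where a [monitor://] stanza in the UF's inputs.conf picks the files up
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# one file per sending host
template(name="PerHost" type="string" string="/var/log/syslog-in/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="PerHost")
```

With two such receivers behind a load balancer, syslog keeps landing on disk even while Splunk itself is down, and the UFs forward the backlog once connectivity returns.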
Hi folks, I'm having a hard time picking the right architecture for a solution that gives high availability for my syslog inputs. My current setup is:
- 4 UFs
- 2 HFs
- Splunk Cloud
Syslog is currently being ingested on one of the HFs as a network input. I saw that to solve my issue I could ingest my syslog logs on a UF and forward them to my HFs, taking advantage of the built-in load balancing of the intermediate forwarders (aka the HFs), which would simplify the deployment a lot. Another solution I have seen is manually putting a load-balancing machine in front of the HFs to ingest the syslog data and balance the load. Which solution is best suited for a Splunk deployment? IMO the first one is much more straightforward, but I need to validate that it is a correct approach. Thanks in advance!
Hello, can you please let me know what steps need to be followed to do so? Thanks
The token was set using the time range control; see the image below.
Hi Team, can you please help me extract data from an external website into a Splunk dashboard? Is it possible? Example: I have to fetch the status below from the website "https://www.ecb.europa.eu/". Output in the Splunk dashboard: T2S is operating normally.
Please show what is in your token and how you have set it.
As you say, it "is _not_ straightforward", and I agree, which is why I think the "solution" here is vague and ought to be refined.