All Posts

Are you asking how to configure Telegraf to poll external devices using SNMP? That's out of scope for this forum, since it has nothing to do with Splunk itself. The add-on you listed is for ingesting metrics data from Telegraf (already received by its inputs) into Splunk.
Ok. Do you mean that you redefined the datamodel itself, or just changed the acceleration parameters? And when you say they are not in sync, are you talking about the dataset definitions or the summarized data? How did you modify those configurations? Do you have the same settings defined within an app pushed from the deployer?
Wait a second. Splunkbase is a channel for application distribution. While in a standalone server setup you can pull an app directly from Splunkbase, it's not meant to be your deployment server. Trying to pull tricks with the application ID and renaming "in place" is a relatively ugly solution. Why not just release a new app and provide docs for migrating between those "versions"?
Maybe this app https://splunkbase.splunk.com/app/6368 helps you see what you have in props.conf in your search context?
Hi. As you have renamed the app and changed its AppId, this is a totally new application with no reference to the old one. There is no automatic way to migrate all those KOs from the old app, and especially not from users' private folders. If those installations are on-prem, you could use e.g. this script/solution https://community.splunk.com/t5/Dashboards-Visualizations/Can-we-move-the-saved-searches-or-knowledge-objects-created/m-p/672741/highlight/true#M55102 You could try to modify this script to work remotely with Splunk Cloud, but it needs some work and I'm not sure whether you can even do it. I have no experience with removing an app from Splunkbase; probably it can be done with a service request? At the very least you could update the old app and tell everyone to use your new one.  r. Ismo
Actually TLS mutual authentication is done by the openssl library and can be configured on an intermediate UF as well (I've done it myself several times on s2s inputs). It's just that the HTTP input isn't officially supported on the UF (any documentation about HEC mentions only Splunk Enterprise or Cloud). So in case anything goes sideways, the first thing you'll hear from support is "use a HF instead of a UF".
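For reference, a minimal sketch of what s2s mutual TLS can look like on the receiving forwarder. The port and certificate paths are assumptions for illustration, not from the original post:

```
# inputs.conf on the intermediate forwarder
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <your_cert_password>
# reject connections from clients without a valid certificate
requireClientCert = true

# server.conf - CA used to validate the client certificates
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
```

The sending forwarders then need a matching client certificate in their outputs.conf SSL settings.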
Basically you shouldn't migrate both the OS and Splunk at the same time. Just select which one you do first, and after you have finalized it and checked for a couple of days that everything is OK, do the second migration. Of course, if you have new hosts to migrate to, the OS side can be done earlier and you just migrate Splunk onto those. Again, you can migrate Splunk before or after the node migration, but don't do both at the same time (e.g. new hosts running a newer version). Here is how I have done it earlier https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 r. Ismo
The overall logic of your search is flawed. You first remove a lot of data with dedup and then try to run stats over a hugely incomplete data set. What is it you're trying to do (in your own words, without SPL)?
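To illustrate the general problem (the index, sourcetype, and field names here are made up for the example). This pattern throws data away before aggregating:

```
index=web sourcetype=access_combined
| dedup clientip
| stats count BY status
```

dedup keeps only one event per clientip, so the stats afterwards runs over a tiny remnant of the data. If the goal is to count events and distinct clients, letting stats do the deduplication itself is usually the right shape:

```
index=web sourcetype=access_combined
| stats count dc(clientip) AS unique_clients BY status
```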
Have you checked that your SHC is healthy and there are no issues, e.g. with the KV store or other replication? The easiest way to check is with the Monitoring Console, or if you haven't set that up, you can check via searches against the internal indexes, the REST API, and the CLI.
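As a rough sketch, these are a couple of the standard checks (run on a search head cluster member):

```
# CLI on a SHC member
splunk show shcluster-status
splunk show kvstore-status

# or from the search bar via REST
| rest /services/shcluster/status
```

These show member status, captain election state, and KV store replication status, which covers the most common SHC health problems.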
I appreciate the response. Updating the macro doesn't seem to make any real difference. I am going to reach out to SentinelOne and see what they have to say, if anything. 
Hi. I haven't used this myself https://splunk.github.io/splunk-connect-for-snmp/v1.9.0/ , but it's probably something you should at least look at? r. Ismo
Hi. I'm not sure I understand correctly how you have installed and configured it. Have you followed these instructions on where to install it https://splunk.github.io/splunk-add-on-for-microsoft-office-365/Install/ ? And then these on how to configure it https://splunk.github.io/splunk-add-on-for-microsoft-office-365/ConfigureAppinAzureAD/ ? Following those steps it should work. If not, then you should look at troubleshooting here https://splunk.github.io/splunk-add-on-for-microsoft-office-365/Troubleshooting/  r. Ismo
You said that you are also running Splunk Web on this machine. Do you mean Splunk Enterprise in a single-instance installation? If so, you don't need to (and shouldn't) run a separate UF on the same box. Splunk Enterprise can collect everything a UF can, and actually much more if needed.
Hi. Why don't you use e.g. the Splunk Operator for Kubernetes or Splunk's Docker version? https://splunk.github.io/splunk-operator/ and https://github.com/splunk/docker-splunk r. Ismo
Recent Splunk versions support INGEST_EVAL in transforms.conf. With it you can select the correct timestamp field and convert it to epoch if needed. Here is one old post where you can see how it works: https://community.splunk.com/t5/Getting-Data-In/How-to-apply-source-file-date-using-INGEST-as-Time/m-p/596865
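A minimal sketch of the idea. The sourcetype name and the assumption that the timestamp is the first 19 characters of _raw in ISO format are examples only; adjust to your data:

```
# props.conf
[my_sourcetype]
TRANSFORMS-set_time = set_time_from_raw

# transforms.conf
[set_time_from_raw]
# parse the leading ISO timestamp out of _raw and assign it
# to _time as epoch seconds at ingest time
INGEST_EVAL = _time=strptime(substr(_raw, 1, 19), "%Y-%m-%dT%H:%M:%S")
```

Because INGEST_EVAL runs at index time, test it on a dev instance first; already-indexed events are not affected.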
Can you describe how you did this migration to the new master? There are several ways to do it, and some work better than others. Here is one which I have used successfully: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062
If you want this kind of feature in this TA, you should request it from Splunk support and/or ideas.splunk.com.
That works.  I was really trying to have a custom alert message with just the thresholds (since my query categorizes different error types and is fairly long, I was hoping not to put it in the alert email).  However, I think putting the whole query is fine at the end of the day, thanks!
I agree about recommending against the UF here, even if it's technically possible. HEC should be protected by certificates, which a HF can do very easily. The UF was designed with the thought (assumption) that it would read from local storage and forward. The HF, and previously the intermediate forwarder (which was just a HF lite), was designed to receive and forward. Because of that design intent, I assume more robust security testing occurs on the HF than on the UF.
Definitely you should move the old logs into some other archive directory on the source side. Depending on the OS and its version, your current situation could become a big bottleneck soon, or it could already be one. I have seen environments where even ls or dir didn't work due to the number of files. ignoreOlderThan is what you should/could try, BUT you must remember that it looks at the file modification time. If someone somehow updates the mtime of a file, Splunk will read it regardless of when it was really modified. From inputs.conf.spec:

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks
* For example, a time window in significant numbers of days or small numbers of weeks are probably reasonable choices.
* If you need a time window in small numbers of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons
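In practice that means something like this in inputs.conf on the source host; the monitored path, index, and threshold value are just examples:

```
# inputs.conf - skip files not modified within the last 14 days
[monitor:///var/log/myapp/*.log]
ignoreOlderThan = 14d
index = myapp
```

And again: pick a threshold comfortably larger than any gap (including planned downtime) during which a file you still care about might sit unmodified.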