All Posts


I appreciate the response. Updating the macro doesn't seem to make any real difference. I am going to reach out to SentinelOne and see what they have to say, if anything. 
Hi, I haven't used this https://splunk.github.io/splunk-connect-for-snmp/v1.9.0/ myself, but it's probably something you should at least look at. r. Ismo
Hi, I'm not sure I understand correctly how you have installed and configured it. Have you followed these instructions on where to install it: https://splunk.github.io/splunk-add-on-for-microsoft-office-365/Install/ ? And then followed these on how to configure it: https://splunk.github.io/splunk-add-on-for-microsoft-office-365/ConfigureAppinAzureAD/ ? Following those steps, it should work. If not, you should look at troubleshooting here: https://splunk.github.io/splunk-add-on-for-microsoft-office-365/Troubleshooting/ r. Ismo
You said that you are also running Splunk Web on this machine. Do you mean a single-instance Splunk Enterprise installation? If so, you don't need (and shouldn't) run a separate UF on the same box. It can collect everything a UF can, and actually much more if needed.
Hi, why don't you use e.g. the Splunk Operator for Kubernetes or Splunk's Docker image? https://splunk.github.io/splunk-operator/ and https://github.com/splunk/docker-splunk r. Ismo
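If you try the Docker route, a minimal sketch of starting a single instance (the container name and password are placeholders; check the docker-splunk README for the current options):

docker run -d -p 8000:8000 \
  -e "SPLUNK_START_ARGS=--accept-license" \
  -e "SPLUNK_PASSWORD=<your-admin-password>" \
  --name so1 splunk/splunk:latest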
In recent Splunk versions there is INGEST_EVAL in transforms.conf. With it you can select the correct timestamp field and convert it to epoch if needed. Here is one old post where you can see the idea of how it works. https://community.splunk.com/t5/Getting-Data-In/How-to-apply-source-file-date-using-INGEST-as-Time/m-p/596865
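A rough sketch of the idea (the sourcetype, stanza name, and timestamp position in _raw are made up for illustration; adjust to your data):

props.conf:
[my_sourcetype]
TRANSFORMS-set_time = set_time_from_raw

transforms.conf:
[set_time_from_raw]
# Parse the leading ISO-8601 timestamp from the raw event and write it to _time at ingest.
INGEST_EVAL = _time=strptime(substr(_raw, 1, 19), "%Y-%m-%dT%H:%M:%S")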
Can you describe how you have done this migration to the new master? There are several ways to do this, and some work better than others. Here is one which I have used successfully. https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062
If you want this kind of feature in this TA, you must request it from Splunk support and/or ideas.splunk.com.
That works.  I was really trying to have a custom alert message with just the thresholds (since my query categorizes different error types and is fairly long, I was hoping not to put it in the alert email).  However, I think putting the whole query is fine at the end of the day, thanks!
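For reference, in case you revisit it: the email alert action can pull fields from the first result row into the message via tokens, which keeps the query itself out of the email. A sketch (field names taken from the search above):

errorType=$result.errorType$ crossed its threshold (count=$result.count$)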
I agree with recommending against using the UF even if it is technically possible. HEC should be protected by certificates, which an HF can do very easily. The UF was designed with the assumption that it would read from local storage and forward. The HF, and previously the intermediate forwarder (which was just an HF lite), was designed to receive and forward. Because of that design intent, I am assuming more robust security testing would occur on the HF than on the UF.
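For what it's worth, a minimal sketch of certificate-protected HEC in inputs.conf on an HF (the cert path, password, token name, and index are placeholders):

[http]
disabled = 0
enableSSL = 1
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = <cert-password>

[http://my_app_token]
token = <generated-token-guid>
index = main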
Definitely you should move old logs into some other archive directory on the source side. Depending on the OS and its version, your current situation could become a big bottleneck soon, or it may be one already. I have seen environments where even ls or dir didn't work due to the number of files. ignoreOlderThan is what you should/could try, BUT you must remember that it looks at the file modification time. If someone somehow updates the mtime of a file, Splunk reads it regardless of whether it was really modified.

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks
* For example, a time window in significant numbers of days or small numbers of weeks are probably reasonable choices.
* If you need a time window in small numbers of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons
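A minimal sketch of how that might look in inputs.conf (the path, threshold, and index are placeholders for your environment):

[monitor:///opt/someApplication/logs]
ignoreOlderThan = 14d
index = main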
Perfmon can be tricky. See my thread here: https://community.splunk.com/t5/Getting-Data-In/Debugging-perfmon-input/m-p/621539#M107042
Even if you don't have the same problem, you can see how to specify counters.
There is also a user context, which needs an app context as well. https://docs.splunk.com/Documentation/Splunk/9.3.2/Troubleshooting/Usebtooltotroubleshootconfigurations
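For example, to see how a configuration resolves in a specific user and app context (the user and app names are placeholders):

splunk btool inputs list --debug --app=search --user=admin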
You can use the ignoreOlderThan option, but be aware that the forwarder still has to keep track of file existence and metadata, so if you have many files in the source directory you might need to raise process limits, and you'll be wasting resources on files you don't care about.
I am in the process of implementing Splunk in a fairly long-lived environment. Log directories contain date-masked log files. I would like to ignore files before today's date, and only import new files. Example: /opt/someApplication/logs/someApplication.202412160600.out

I am unable to wildcard /opt/someApplication/logs/someApplication.*.out as there are logs dating back to 2017 and I'd exceed our daily license/quota by several orders of magnitude. Changing the logging format is not an option. Exclude-lists appear to be a solution, but even using regex would be incredibly burdensome. Thoughts?
I am new to Splunk and am teaching myself how to use it as I integrate it with my environment. I inherited an existing Splunk Enterprise instance that, at one point, apparently used to work to some degree but by the time I joined the team and took over had fallen into disuse. After getting it upgraded from 9.0 to 9.3.2, rolling out Universal Forwarders, tinkering with inputs.conf, and fixing some network issues, I found myself finally able to get Windows Event Log data into my indexer from a couple of different test machines.

The inputs.conf I was using was something I had found on one of the existing machines before reinstalling the UF, and I noticed that it had a lot more stuff in it than Windows Event Log stanzas. Some of it suggested it monitored things I was interested in right now, such as CPU utilization. However, I noticed that exactly nothing outside of Windows Event data was ever making it across the wire, no matter how I reconfigured the inputs.conf stanzas. The one I homed in on first was CPU utilization, and through research I discovered that when I invoke a stanza in inputs.conf it has to exist to some degree within the Settings > Data Inputs library (?) present on my Splunk instance. perfmon://CPU, perfmon://CPULoad, and perfmon://Processor were all stanzas I found online for (among other things) checking what % CPU utilization a target server was at. None of them worked. Looking into these Data Inputs, it looks like something is broken: when I select these three (as an example), Splunk's web UI throws up an error saying that "Processor is not a valid object".

Following some guidance online, I was able to make my own custom Data Input just called testCPU, pointing at a custom index I call testWindows, and basically make it a clone of CPU (taking in % Processor Time and % User Time as counters and whatnot). For the required object, I noticed that "Processor Information" was an option I could pick rather than "Processor", so I went with that one. I then deployed a stanza in inputs.conf that says perfmon://testCPU on one of my UFs, and it absolutely works. My indexer is now pulling in CPU % use information. I suspect if I went back to the three CPU-related entries above and set them to "Processor Information", they would work and any of the existing apps I inherited that invoke those stanzas would themselves start pulling in data.

However, I do not know why my built-in Data Inputs are broken, and it isn't just limited to the CPU ones I used as an example above. For example, the "System" input claims "System is not a valid object" and the available objects dropdown does not have an obvious replacement (there's no "System Information" to follow the pattern above). The "PhysicalDisk" DI claims "PhysicalDisk is not a valid object" but has nothing obvious to replace it either. Available Memory claims "Memory" is not a valid object with no obvious replacement, etc.

Does anyone know what might be going on here? Looking at how the stanzas are configured online, the examples I see for the handful above do in fact invoke object = "xxx" matching the names of things my Splunk says aren't valid. Some of these might have obvious replacements ("Network" might be "Physical Network Card Activity" or something like that) but a lot of them don't. How should I go about fixing these? My first assumption was that I would find some kind of "Objects" config file that might have clues to how these got redefined, but that wasn't the case.
I have a ticket in with support, but I am broadening the scope here to see if anyone else has familiarity with something like this (and also to create something for another user with the same issue to find in the future).
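For anyone comparing notes later, a sketch of the kind of stanza that ended up working for me (the interval and instances values here are illustrative, not necessarily what you want):

[perfmon://testCPU]
object = Processor Information
counters = % Processor Time; % User Time
instances = *
interval = 10
index = testWindows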
That should work already. Could you try putting that search filter at the end of your alert search?

<yoursearch>
| search (errorType="Client" AND count > 8) OR (errorType="Credentials" AND count > 8) OR (errorType="Other" AND count > 8)
Try excluding Up states at the end rather than at the beginning.

index="network" %BGP-5 *clip*
| dedup src_ip
| stats count by state_to, Device_name, src_ip
| where state_to!="Up"
At first glance I would suspect that the search filters for your roles are contradicting each other and filtering out all events. E.g. if you have the following roles with search filters:

ROLE A - (index=index1 sourcetype=something)
ROLE B - (index=index2 sourcetype=something)

Then if you have roles A and B, Splunk will force you to search with "(index=index1 sourcetype=something) (index=index2 sourcetype=something)", which will retrieve 0 events because no events exist in both index1 and index2 at the same time.

Are you able to post your sanitized search filters to look for contradictory filters?
Officially, HEC isn't supported on the UF. I have read several times that it works, but I never actually got around to testing it myself.