All Posts


Also remember that if you do manual extraction with the rex command and only then search on its results, it will be much slower than simply searching the index: instead of finding the value in the index, Splunk has to pass every event through the regex extraction and only then find the matching events.
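As a rough illustration (the index, field, and value names here are invented), compare these two sketches:

```
index=web | rex "user=(?<user>\w+)" | search user=alice
index=web alice | rex "user=(?<user>\w+)" | search user=alice
```

The first form pulls every event in the index through rex before filtering; the second lets the indexed term "alice" narrow the event set first, so only candidate events pay the regex cost.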
Your question is a bit vague, so I'm not sure what you want; please be a little more descriptive. But from what you wrote I assume that you do some conditional aggregation and want to "go back" to the raw events fulfilling your conditions. You can't do that this way. Splunk "loses" all information not explicitly passed on from a command. So when you run the stats command, only the results of stats are available for further processing; the original events are no longer known in your pipeline. You have to approach it differently, probably by adding an artificial "classifier" field or two, but I can't really say without knowing what exactly you want to achieve.
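One hedged sketch of the "classifier" idea (field names and thresholds invented): tag each event with an eval before stats, so the category survives the aggregation:

```
index=app
| eval class=case(status>=500, "error", status>=400, "warn", true(), "ok")
| stats count by class, host
```

After stats you can still filter or split on class, because it was explicitly carried through the pipeline rather than lost with the raw events.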
@akshada_s - If you are trying to run a Splunk search from outside a script, then the jobs endpoint is usually the answer. Find more info here - https://docs.splunk.com/Documentation/Splunk/9.1.1/RESTTUT/RESTsearches   I hope this helps!!!
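For illustration, a minimal Python sketch of preparing a call to that jobs endpoint. The host, port, credentials, and search string below are placeholders; only the endpoint path comes from the REST docs linked above.

```python
import base64
import urllib.parse
import urllib.request

BASE = "https://localhost:8089"  # management port; placeholder host


def build_search_job_request(search, username, password):
    """Build a POST request that creates a search job via the REST API."""
    # Splunk expects the query to start with a generating command such as "search"
    if not search.strip().startswith(("search", "|")):
        search = "search " + search
    data = urllib.parse.urlencode({"search": search}).encode()
    req = urllib.request.Request(f"{BASE}/services/search/jobs", data=data)
    # Basic auth for the sketch; a session token from /services/auth/login also works
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req


req = build_search_job_request("index=_internal | head 5", "admin", "changeme")
# urllib.request.urlopen(req) would return an XML body containing the job SID,
# which you then poll and fetch results for, as the tutorial shows.
```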
I have a query where I am looking for multiple values with OR and then counting the occurrences with stats. The query is something like this:  index=**** ("value1") OR ("Value3") OR ... | stats count(eval(searchmatch("value1"))) as value1, count(eval(searchmatch("value2"))) as value2  Now I want to collect only those values which are found, meaning their count is greater than 0. How can I achieve this so that only the stats of the values found in the events are displayed? The search values are mostly IPs, URLs, domains, etc. Note: I'm making this query for a dashboard.
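One common approach (a sketch, not tested against your data) is to transpose the single stats row into rows, drop the zero counts, and transpose back:

```
index=**** ("value1") OR ("value2")
| stats count(eval(searchmatch("value1"))) as value1 count(eval(searchmatch("value2"))) as value2
| transpose column_name=value
| where 'row 1' > 0
| transpose header_field=value
| fields - column
```

transpose turns each counted column into a row so that where can filter on the count; the second transpose restores the original one-row shape with only the nonzero values left.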
@chrisyounger is the developer for the App, he should be able to help here.  
Hello, The first search does not work because the ipv6 from the dropdown is in a compressed format from a different data source, while the ipv6 in the index is not in a compressed format, so it has to go through a regex or function to convert it to a compressed format in the second search. Thank you for your help
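If the normalization ever moves outside SPL (e.g. a scripted lookup), Python's standard ipaddress module produces the canonical compressed form; a small sketch with example addresses:

```python
import ipaddress


def compress_ipv6(addr):
    """Return the canonical compressed form of an IPv6 address string."""
    return ipaddress.IPv6Address(addr).compressed


print(compress_ipv6("2001:0db8:0000:0000:0000:0000:0000:0001"))  # → 2001:db8::1
```

Normalizing both the dropdown token and the indexed field through the same function sidesteps the compressed-vs-expanded mismatch entirely.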
@BoldKnowsNothin - Yes, it can be done with inputs.conf on the UF under the App's local folder. (Most likely the Windows Add-on will have this input.) [Your input stanza that is collecting the data] blacklist5 = 4674   In blacklist5, the number 5 could be different depending on what you are deploying.   I hope this helps!! And welcome to the Splunk community!!!
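For the Windows Security log specifically, the stanza might look like the sketch below; the stanza name must match whatever input is actually collecting the data on your UF, and the blacklist number just has to be unused in that stanza:

```
[WinEventLog://Security]
disabled = 0
blacklist5 = EventCode="4674"
```

The EventCode="regex" form is also accepted in WinEventLog blacklist settings and scopes the match to the event code field rather than anywhere in the event.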
"and I want to identify IPs that do not belong to any of the IP address ranges in my results. Example :" @karimoss - Do you want to print out all the possible IP addresses that are not in your result? * How is the d.e.f.g IP different from the other IPs in the result set? Is it the only IP with a different subnet? * Are you looking to find it subnet-wise?
Hello, I have a list of IPs generated from the following search : index=<source> | stats count by ip and I want to identify IPs that do not belong to any of the IP address ranges in my results. Example :   a.b.c.101 a.b.c.102 a.b.c.103 d.e.f.g a.b.c.104 I want to keep only the address d.e.f.g. Thanks in advance for your help Regards,
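If the "expected" range is known up front, one hedged sketch (the subnet below is a stand-in for your real range) uses cidrmatch to keep only the outliers:

```
index=<source>
| stats count by ip
| where NOT cidrmatch("a.b.c.0/24", ip)
```

If the ranges are not known in advance, you would first need to define what makes a range "yours", for example group addresses by their /24 and flag any subnet that appears only once.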
I tried that. The app works but has some issues, such as not being able to collect all the data for a given period, which may be related to the data size in ELK. In the end, I used an API connection to get the data as a CSV and used Splunk to collect that file.
Hi @sohrab_keramat, I know that the logs don't carry information about which systems a log passed through, so how can you say that a log was sent to an HF but wasn't sent to the Indexers? Maybe you're sending logs from the missing device to only one HF? Do you have other logs (e.g. Splunk internal logs) from that HF? Did you try to send logs from that device to other HFs? Did you check the configuration on the HF that inputs logs from that device? Ciao. Giuseppe
Hello to all dear friends and fellow platformers. I have 36 indexers and 7 heavy forwarders in my cluster. Every once in a while I notice that logs from one of the devices I receive data from are not entering Splunk, even though the device is actually sending them. With further investigation I find that the log leaves the source and is received on one of the 7 HFs, but either the HF does not forward it to the indexers or the indexers do not index it, so from Splunk's point of view the device's log stream appears disconnected at the source. a. Is there a way, in a scenario with indexer clustering and a large number of HFs, to verify whether a log is correctly passed from the HF to the indexer or not? b. What is the likely cause of this problem, and how do I fix it? Thank you.
Hello Splunk Community, I'm encountering an issue with a custom app I've developed for Splunk. The app is designed to collect and analyze data from various sources, and it has been working perfectly until recently. However, after a recent update to Splunk, I've noticed that some of my custom data inputs are not functioning as expected. Specifically, I've configured data inputs using the modular input framework, and these inputs were collecting data without any problems. Now, I'm seeing errors in the logs related to these inputs, and data ingestion has become inconsistent. Has anyone else experienced similar issues with modular inputs after a Splunk update? Are there any known compatibility issues or changes in the latest Splunk version that might affect custom data inputs? I'd appreciate any insights or suggestions on how to troubleshoot and resolve this problem. Thanks in advance!
I get the feeling that optimization is the least of your problems here. "I am trying to implement a behavioral rule, that checks if an ip was used in the last 7 days or not. ... [search index=<index> operationName="Sign-in activity" earliest=-7d@d | ...]"     It is just unclear what "used in the last 7 days" really means, because your mock code only constrains earliest.  The default latest is now().  So that mock code (if not for the code error that @bowesmana pointed out) would have been exactly the same as if the main search started at earliest=-7d@d latest=now.  In other words, you would have picked up everything from the start of the 7th day back to now().  There would have been no "false". @bowesmana interpreted your intention thus: starting 7 days back, determine whether an IP address that appears in the current day had also appeared in the earlier days.  Is this the correct interpretation? If that is the requirement, the following should make the distinction. index=<index> operationName="Sign-in activity" NOT body.properties.deviceDetail.displayName=* earliest=-7d@d ```latest=now``` | eval history = if(_time < relative_time(now(), "@d"), "past7", "today") | stats values(history) as is_historical count by ipAddress | where is_historical == "today" ``` shorthand for "today" IN is_historical ``` | eval is_historical = if(is_historical == "past7", "true", "false")  
Maybe you can first answer the question of why the first search does not satisfy your need?  In other words, what is that rex supposed to accomplish?  If your data look like the following:

_raw                                            _time                 ip
foo 1.1.1.1 bar                                 2023-09-23 00:44:01   1.1.1.1
foo 2.2.2.2 bar                                 2023-09-23 00:44:01   2.2.2.2
foo 2001:db8:3333:4444:5555:6666::2101 bar      2023-09-23 00:44:01   2001:db8:3333:4444:5555:6666::2101
foo 2001:db8:3333:4444:5555:6666::2102 bar      2023-09-23 00:44:01   2001:db8:3333:4444:5555:6666::2102

then ip="$ip_token$" should pick up the correct event whether $ip_token$ is 1.1.1.1 (IPv4) or 2001:db8:3333:4444:5555:6666::2101 (IPv6).  What am I missing here?
Splunk commandment #3: Whenever you have the urge to join, purge that thought and restate the problem in clear terms. Based on the mock code, your two subsearches contain the exact same terms except that one is one hour shorter than the other; the stats is also exactly the same except for the field names of the output.  I sense that the problem you are trying to solve is this: count unique dest_location-dest_ip combinations by src_ip in the two time intervals.  Is this correct? The following is a transliteration of that requirement.   index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h ```latest=now``` | stats dc(eval(if(_time < relative_time(now(), "-1h"), dest_location . "-" . dest_ip, null()))) as oldconnections dc(eval(dest_location . "-" . dest_ip)) as allconnections by src_ip   It only performs one index search covering one time interval.  This is a lot more efficient than a union of two largely overlapping subsearches.
Hi Team, Can it be possible to include 'update email' actions in "MS Graph for Office 365" SOAR App (similar to  "EWS for Office 365" App)? Thank you, CK
Hello comrades, We are using the universal forwarder on our hosts, and we have a noisy dude that produces EventID 4674 and exceeds our license limit. Can we shut this dude's mouth on the agent side, but only for EventID 4674? Sorry, newbie here. Many thanks,
Is it possible to run a different filter in an index search based on a condition in the dropdown below? The second filter works for both ipv4 and ipv6, but it is slowing down the search.  I don't want ipv4 going through my filter for ipv6. Thanks If I select the IPv4 dropdown box > select 1.1.1.1: ip_token=1.1.1.1 Search: index=vulnerability_index ip="$ip_token$" If I select the IPv6 dropdown box > select 2001:db8:3333:4444:5555:6666::2101: ip_token=2001:db8:3333:4444:5555:6666::2101 Search: index=vulnerability_index | rex mode=sed field=ip "s/<regex>/<replacement>/<flags>" | search ip="$ip_token$"
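In Simple XML, one way to sketch this (token and label names invented) is to set a search-fragment token from the dropdown's change handler, so only the IPv6 branch carries the sed step:

```
<input type="dropdown" token="ip_family">
  <label>IP version</label>
  <choice value="v4">IPv4</choice>
  <choice value="v6">IPv6</choice>
  <change>
    <condition value="v4">
      <set token="ip_filter">ip="$ip_token$"</set>
    </condition>
    <condition value="v6">
      <set token="ip_filter">| rex mode=sed field=ip "s/&lt;regex&gt;/&lt;replacement&gt;/&lt;flags&gt;" | search ip="$ip_token$"</set>
    </condition>
  </change>
</input>
```

The panel search then becomes index=vulnerability_index $ip_filter$, so IPv4 selections filter directly on the index while only IPv6 selections pay for the rex.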
Talk to your Linux admin about that.  Splunk does not write to /var so that is not a Splunk file.