All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@chrisyounger is the developer of the app; he should be able to help here.
Hello, the first search does not work because the IPv6 address from the dropdown is in compressed format (it comes from a different data source), while the IPv6 in the index is not compressed, so it has to go through a regex or function to convert it to compressed format in the second search. Thank you for your help.
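For reference, if the normalization can happen outside Splunk (for example in a scripted lookup or preprocessing script), Python's standard ipaddress module produces the canonical compressed form; this is only a sketch, and the function name is my own:

```python
import ipaddress

def compress_ipv6(addr: str) -> str:
    """Return the canonical compressed form of an IPv6 address string."""
    return ipaddress.IPv6Address(addr).compressed

# An expanded address and its compressed form normalize to the same string
print(compress_ipv6("2001:0db8:0000:0000:0000:0000:0000:0001"))  # 2001:db8::1
print(compress_ipv6("2001:db8::1"))                              # 2001:db8::1
```

Normalizing both the dropdown value and the indexed field to the same representation avoids the per-event sed rex entirely.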
@BoldKnowsNothin - Yes, it can be done with inputs.conf on the UF under the app's local folder. (Most likely the Windows Add-on will have this input.)

[Your input stanza that is collecting the data]
blacklist5 = 4674

In blacklist5, the number 5 could be different depending on what you are already deploying.

I hope this helps!! And welcome to the Splunk community!!!
and I want to identify IPs that do not belong to any of the IP address ranges in my results. Example: @karimoss - Do you want to print out all the possible IP addresses that are not in your result?
* How is the d.e.f.g IP different from the other IPs in the result set? Is it the only IP with a different subnet?
* Are you looking to find them subnet-wise?
Hello, I have a list of IPs generated from the following search:

index=<source> | stats count by ip

and I want to identify IPs that do not belong to any of the IP address ranges in my results. Example:

a.b.c.101
a.b.c.102
a.b.c.103
d.e.f.g
a.b.c.104

I want to keep only the address d.e.f.g. Thanks in advance for your help. Regards,
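One reading of the requirement — keep only addresses whose subnet appears just once in the list — can be sketched outside Splunk with Python's ipaddress module. The /24 prefix and the sample values below are illustrative assumptions, not taken from the actual data:

```python
import ipaddress
from collections import Counter

def subnet_outliers(ips, prefix=24):
    """Return the IPs whose /prefix network appears only once in the list."""
    nets = [ipaddress.ip_network(f"{ip}/{prefix}", strict=False) for ip in ips]
    counts = Counter(nets)
    return [ip for ip, net in zip(ips, nets) if counts[net] == 1]

ips = ["10.0.1.101", "10.0.1.102", "10.0.1.103", "192.168.5.7", "10.0.1.104"]
print(subnet_outliers(ips))  # ['192.168.5.7']
```

The same idea can be expressed in SPL by grouping on a truncated IP prefix and keeping groups with count 1.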
I tried that; the app works but has some issues, such as not being able to collect all the data for a period, which may be related to the data size in ELK. In the end, I used an API connection to get the data as a CSV and used Splunk to collect the data.
Hi @sohrab_keramat, I know that the logs don't carry information about the systems they passed through, so how can you tell that a log was sent to an HF but not forwarded to the indexers? Maybe you're sending logs from the missing device to only one HF? Do you have other logs (e.g. Splunk internal logs) from that HF? Did you try sending logs from that device to other HFs? Did you check the configurations on the HF for inputting logs from that device? Ciao. Giuseppe
Hello to all dear friends and fellow platformers. I have 36 indexers and 7 heavy forwarders in my cluster. Every once in a while I notice that one of the devices I receive logs from has stopped appearing in Splunk, even though the source is actually sending its logs. On further investigation, I find that the device's logs are sent to and received by one of the 7 HFs, but then either the HF does not forward them to the indexers or the indexers do not index them, so from Splunk's point of view the log flow from that device is broken.

a. In a scenario with indexer clustering and a large number of HFs, is there a way to verify whether logs are correctly output from an HF to the indexers?
b. What is the cause of this problem, and how can it be resolved?

Thank you.
Hello Splunk Community,

I'm encountering an issue with a custom app I've developed for Splunk. The app is designed to collect and analyze data from various sources, and it had been working perfectly until recently. However, after a recent update to Splunk, I've noticed that some of my custom data inputs are not functioning as expected.

Specifically, I've configured data inputs using the modular input framework, and these inputs were collecting data without any problems. Now I'm seeing errors in the logs related to these inputs, and data ingestion has become inconsistent.

Has anyone else experienced similar issues with modular inputs after a Splunk update? Are there any known compatibility issues or changes in the latest Splunk version that might affect custom data inputs? I'd appreciate any insights or suggestions on how to troubleshoot and resolve this problem. Thanks in advance!
I get the feeling that optimization is the least of your problems here.

I am trying to implement a behavioral rule that checks if an IP was used in the last 7 days or not. ... [search index=<index>operationName="Sign-in activity" earliest=-7d@d | ...]

It is just unclear what "used in the last 7 days" really means, because your mock code only constrains earliest. The default latest is now(). So that mock code (if not for the code error that @bowesmana pointed out) would have been exactly the same as if the main search started at earliest=-7d@d latest=now. In other words, you would have picked up everything from the beginning of the 7th day back to now(). There would have been no "false".

@bowesmana interpreted your intention thus: starting from the 7th day back, determine whether an IP address that appears in the current day had also appeared in the earlier days. Is this the correct interpretation? If that is the requirement, the following should make the distinction.

index=<index> operationName="Sign-in activity" NOT body.properties.deviceDetail.displayName=* earliest=-7d@d ```latest=now```
| eval history = if(_time < relative_time(now(), "@d"), "past7", "today")
| stats values(history) as is_historical count by ipAddress
| where is_historical == "today" ``` shorthand for "today" IN is_historical ```
| eval is_historical = if(is_historical == "past7", "true", "false")
Maybe you can first answer the question: why does the first search not satisfy your need? In other words, what is that rex supposed to accomplish? If your data look like the following:

_raw                                        _time                 ip
foo 1.1.1.1 bar                             2023-09-23 00:44:01   1.1.1.1
foo 2.2.2.2 bar                             2023-09-23 00:44:01   2.2.2.2
foo 2001:db8:3333:4444:5555:6666::2101 bar  2023-09-23 00:44:01   2001:db8:3333:4444:5555:6666::2101
foo 2001:db8:3333:4444:5555:6666::2102 bar  2023-09-23 00:44:01   2001:db8:3333:4444:5555:6666::2102

then ip="$ip_token$" should pick up the correct event whether $ip_token$ is 1.1.1.1 (IPv4) or 2001:db8:3333:4444:5555:6666::2101 (IPv6). What am I missing here?
Splunk commandment #3: Whenever you have the urge to join, purge that thought and restate the problem in clear terms.

Based on the mock code, your two subsearches contain the exact same terms except that one is one hour shorter than the other; the stats is also exactly the same except for the name of the output field. I sense that the problem you are trying to solve is this: count unique dest_location-dest_ip combinations by src_ip in the two time intervals. Is this correct? The following is a transliteration of the requirement.

index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h ```latest=now```
| stats dc(eval(if(_time < relative_time(now(), "-1h"), dest_location . "-" . dest_ip, null()))) as oldconnections dc(eval(dest_location . "-" . dest_ip)) as allconnections by src_ip

It performs only one index search covering one time interval. This is a lot more efficient than a union of two largely overlapping subsearches.
Hi Team, Would it be possible to include an 'update email' action in the "MS Graph for Office 365" SOAR app (similar to the "EWS for Office 365" app)? Thank you, CK
Hello comrades, We are using the universal forwarder on our hosts. And we have a noisy dude that produces EventID 4674 and exceeds our license limit. Can we shut this dude's mouth on the agent side, but only for EventID 4674? Sorry, newbie here. Many thanks,
Is it possible to run a different filter in an index search based on a condition in the dropdown below? The second filter works for both IPv4 and IPv6, but it is slowing down the search. I don't want IPv4 addresses going through my filter for IPv6. Thanks

If IPv4 dropdown box > select 1.1.1.1, then ip_token=1.1.1.1
Search:
index=vulnerability_index ip="$ip_token$"

If IPv6 dropdown box > select 2001:db8:3333:4444:5555:6666::2101, then ip_token=2001:db8:3333:4444:5555:6666::2101
Search:
index=vulnerability_index
| rex mode=sed field=ip "s/<regex>/<replacement>/<flags>"
| search ip="$ip_token$"
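For what it's worth, the distinction the dashboard needs (which address family a token belongs to) is straightforward to compute programmatically. This Python sketch is illustrative only — it is not Splunk dashboard code — and shows how a token could be classified before deciding which search to run:

```python
import ipaddress

def ip_version(token: str) -> int:
    """Return 4 or 6 for a valid IP literal; raise ValueError otherwise."""
    return ipaddress.ip_address(token).version

print(ip_version("1.1.1.1"))                             # 4
print(ip_version("2001:db8:3333:4444:5555:6666::2101"))  # 6
```

In Simple XML the same branching is usually done with a change handler on the dropdown that sets a second token holding the version-specific search fragment.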
Talk to your Linux admin about that. Splunk does not write to /var, so that is not a Splunk file.
Hello Splunk community, I have an issue with a Splunk deployment server where the /var filesystem is 30 GB in size and 22 GB are currently being used by the log "uncategorised.log" under the path /var/log/syslog. Is it viable/possible to delete that log, or to back it up to tape or to a different server?
To get the payload in the request info you need to add the lines below to restmap.conf:

[script:upload_email_list]
match = /data/email_sender/upload_email_list
script = upload_email_list.py
scripttype = persist
python.version = python3
handler = upload_email_list.UploadEmailHandler
passPayload = true # used to see the payload in the API call
output_modes = json # output in JSON format
passHttpHeaders = true # used to see headers in the API call
passHttpCookies = true # used to see cookies in the API call

Output: request info

{'output_mode': 'xml', 'output_mode_explicit': False,
....
....
'payload':'{"fileContent":"ravinandasana1998@gmail.com,ravisheart123@gmail.com"}'
.....
}
I'm trying to UNION two different tables containing info on foreign traffic - the first table is a log with time range earliest=-24h latest=-1h. The second is the logs of those same systems for the full 24 hours (earliest=-24h latest=now()). My search:

| union
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=-1h
    | eval dest_loc_ip1=dest_location . "-" . dest_ip
    | stats dc(dest_loc_ip1) as oldconnections by src_ip]
    [ search index=<index1> src_ip IN (<srcvalues>) AND dest_ip!=<ipvalues> NOT dest_location IN ("<locvalues>") earliest=-24h latest=now()
    | eval dest_loc_ip2=dest_location . "-" . dest_ip
    | stats dc(dest_loc_ip2) as allconnections by src_ip]
| fields src_ip oldconnections allconnections

I am trying to compare the values of oldconnections vs allconnections for only the original systems (basically a left join), but for some reason allconnections shows all null values. I get a similar issue when trying a left join - the allconnections values are not consistent with the values when I run the search by itself. I can run the two searches separately with the expected results, so I'm guessing there's an error in my UNION syntax and ordering. Thanks for the help! - Also open to other ways to solve this.
Hi
What did you find in /opt/splunk/var/log/splunk/splunkd.log? There should be a reason for the exit.
R. Ismo