All Posts

Splunk does not have an IP version check per se, but you can use ipmask to your advantage. ipmask only works with IPv4, so if you are confident that your query returns legitimate IP addresses, you can tell IPv4 from IPv6:

| dbxquery query="select IP from tableCompany"
| eval IP = if(isnull(ipmask("255.255.255.255", IP)), IP . "/128", IP . "/32")

Here is a snippet to help you observe how ipmask works in this context:

| makeresults
| eval ip = mvappend("10.11.12.13", "::")
| mvexpand ip
| eval hostmask4 = ipmask("255.255.255.255", ip)

Netmask 255.255.255.255 also serves as an IPv4 validator. IPv6 can be validated using regex, but if your database is trustworthy, you can save yourself that trouble.
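Outside of SPL, the same version test can be sketched with Python's standard-library ipaddress module (the function name and sample addresses here are illustrative, not part of the original answer):

```python
import ipaddress

def with_host_prefix(ip: str) -> str:
    """Append /32 to an IPv4 host address and /128 to an IPv6 one."""
    addr = ipaddress.ip_address(ip)  # raises ValueError on invalid input
    return f"{ip}/32" if addr.version == 4 else f"{ip}/128"

print(with_host_prefix("10.11.12.13"))  # → 10.11.12.13/32
print(with_host_prefix("::"))           # → ::/128
```

Like the ipmask trick, this assumes the input is already a legitimate IP address; ip_address simply refuses anything else.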
I know this is an old one, but my searches brought me here and it might bring someone else here. After going through installing new Java versions and all the JAVA_HOME settings, I used my EDR tool and noticed this file was being called:

/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/customized.java.path

It had a reference to the older Java version and not the new one, so I updated the path in there. For anyone who finds this and has problems starting up the task server after updating Java: search for the "customized.java.path" file in the DB Connect app folders.
Hello,

The CSV file is derived from dbxquery, so I need to figure out how to append /128 for IPv6 and /32 for IPv4. Does Splunk have a function to check if an IP is IPv4 or IPv6? Something like:

| dbxquery query="select IP from tableCompany"
| eval IP = if(isIPv4(IP), IP . "/32", IP . "/128")

Thank you so much
I have installed the Splunk forwarder on a Windows 10 VM and have Splunk installed on a Debian VM. I have restarted the Splunk forwarder on the Win10 VM, but when I log into Splunk Enterprise on the Debian VM and go to Search & Reporting > Data Summary, there is no listing of the Win10 VM in either the hosts or sources list. Does anyone have any idea what I could be doing wrong, or any suggestions of things I could try?
Thank you. 
The add-on requires Python, so it must be installed on a heavy forwarder (HF). This is per the docs at https://www.cisco.com/c/en/us/td/docs/security/firepower/70/api/eNcore/eNcore_Operations_Guide_v08.html#_Toc76556476 Consider standing up a separate HF for eStreamer inputs.
@mattymo this happened to us as well, but only when we moved to a load balancer in front of our indexers. With our previous setup, HEC on a heavy forwarder, we never had this issue. Do you know if this is specific to load-balanced HEC?
dc outputs a number. Just use it in a logical expression, like:

source="WinEventLog:Security" EventCode IN (628, 627, 4723, 4724)
| stats values(Target_Account_Name) dc(Target_Account_Name) by Subject_Account_Name
| where 'dc(Target_Account_Name)' > 5

Or, more customarily:

source="WinEventLog:Security" EventCode IN (628, 627, 4723, 4724)
| stats values(Target_Account_Name) as Target_Account_Name dc(Target_Account_Name) as Target_Account_Count by Subject_Account_Name
| where Target_Account_Count > 5

Alternatively, you do not need to use dc at all. You can apply mvcount to the aggregated Target_Account_Name to give more concise output:

source="WinEventLog:Security" EventCode IN (628, 627, 4723, 4724)
| stats values(Target_Account_Name) as Target_Account_Name by Subject_Account_Name
| where mvcount(Target_Account_Name) > 5
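For readers less familiar with stats, the distinct-count-per-subject threshold can be sketched in plain Python; the event list below is made-up sample data, not from the original question:

```python
from collections import defaultdict

# (subject_account, target_account) pairs, standing in for Windows events
events = [
    ("admin1", "userA"), ("admin1", "userB"), ("admin1", "userC"),
    ("admin1", "userD"), ("admin1", "userE"), ("admin1", "userF"),
    ("admin2", "userA"), ("admin2", "userA"),
]

# Equivalent of: stats values(Target_Account_Name) by Subject_Account_Name
targets_by_subject = defaultdict(set)
for subject, target in events:
    targets_by_subject[subject].add(target)

# Equivalent of: where dc(Target_Account_Name) > 5
flagged = {s: t for s, t in targets_by_subject.items() if len(t) > 5}
print(sorted(flagged))  # → ['admin1']
```

The set per subject plays the role of values(); its length is the dc()/mvcount() threshold.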
Found one piece of evidence that the problem is the network. At last I have proof that the network team has to fix it. Basically, I ran a network search from multiple sources in the same subnet towards the HF on port 9997 and displayed bytes_in. The one UF that I have a problem with has bytes_in=0, while the rest have bytes_in comparable to bytes_out.

SPL:

sourcetype=pan:traffic src=10.68.x.x/16 dest=10.68.p.q dest_port=9997
| stats sparkline(sum(bytes_out)) as bytes_out sparkline(sum(bytes_in)) as bytes_in sum(bytes_in) as total_bytes_return by src dest dest_port

This SPL returns hundreds of rows, and when I sort by total_bytes_return there's a flat line for bytes_in and 0 for total_bytes_return for the UF in question. I can sleep now and pass this over to the network team.
1. a) Should I add /128 to all IPv6 entries in my CSV file to get this to work?
    b) If yes, does it mean I need an extra layer to check whether each entry is IPv6 or IPv4 and then append /128?

IPv4 is 32-bit, IPv6 is 128-bit. This means that if your CSV only contains host addresses, you need to use /128 with all IPv6 entries and /32 with all IPv4 entries.

2. Will OUTPUTNEW work just fine as a regular lookup?
3. a) If I update the CSV file (with new fields), will the lookup definition still work?

CIDR(ip) does not change any other aspect of the lookup.

3. b) Is there a way to automate updates to the lookup definition? I plan on creating an automatic update on the CSV, but it looks like the definition is tied to specific fields.

Not sure what you mean by automation. If you mean in the background with some external utility, certainly. Once the lookup is defined, all you need to do is update the file. (In a distributed deployment, however, you do need to take care to update every search head.) Within Splunk, take a look at outputlookup: you can use a Splunk search to update an existing lookup, or even create a new one.

4. Note that if I use /120, it could return multiple results, like the following (expected ip / test mask 2 / test mask 4 / test mask 6): 2001:db8:3333:4444:5555:6666::2101

That is precisely what a netmask does. (Using CIDR for a host address is just a special, less common use case.) You can read about IP address spaces, subnets, and CIDR in a variety of online resources.
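The /128-versus-/120 point can be demonstrated concretely with Python's ipaddress module, a sketch independent of the Splunk lookup itself: a /128 network contains exactly one host, while a /120 covers the last 256 addresses, which is why it matches multiple rows.

```python
import ipaddress

host = ipaddress.ip_address("2001:db8:3333:4444:5555:6666::2101")

# /128 is an exact host match: the network holds a single address
exact = ipaddress.ip_network("2001:db8:3333:4444:5555:6666::2101/128")
print(host in exact, exact.num_addresses)   # → True 1

# /120 leaves 8 host bits free, so 2^8 = 256 addresses fall inside it
wide = ipaddress.ip_network("2001:db8:3333:4444:5555:6666::2100/120")
print(host in wide, wide.num_addresses)     # → True 256
```

This mirrors the CIDR(ip) match type: an entry stored as .../120 will match every address in that 256-address block, not just one host.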
Instead of muscling SPL into giving you lots of "OR" expressions (which also slows down performance), it is much more profitable to change the search that will use this token to match distinct values. First, change the $my_token$ definition from a logical expression to a simple enumeration:

| inputlookup errorLogs
| where RunStartTimeStamp == "2023-01-26-15.47.24.000000"
| where HostName == "myhost.com"
| where JobName == "runJob1"
| where InvocationId == "daily"
| eval RunID = coalesce(RunID, ControllingRunID)
| stats values(RunID) as RunID

This gives RunID = ("12345", "67890"). Use this value as $my_token$. Then, in your search, do the same:

<search setups> (RunID IN ($my_token$) OR ControllingRunID IN ($my_token$))
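The coalesce-then-enumerate step can be sketched outside Splunk as well; the rows below are invented sample data matching the thread's two IDs, just to show how the deduplicated token value is formed:

```python
# Each row may carry RunID, ControllingRunID, or both (None = missing field)
rows = [
    {"RunID": "12345", "ControllingRunID": None},
    {"RunID": None, "ControllingRunID": "67890"},
    {"RunID": "12345", "ControllingRunID": "67890"},
]

# coalesce(RunID, ControllingRunID), then collect distinct values
run_ids = sorted({row["RunID"] or row["ControllingRunID"] for row in rows})

# Render as an IN()-style enumeration, like stats values() feeding $my_token$
token = "(" + ", ".join(f'"{rid}"' for rid in run_ids) + ")"
print(token)  # → ("12345", "67890")
```

A flat enumeration like this lets the consuming search use IN() on both fields instead of an ever-growing OR chain.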
Hi, I am trying to run a search and have tokens setting various search items. What I need is to create a search from an input file and have one field referenced many times for different fields.

My search is:

| inputlookup errorLogs
| where RunStartTimeStamp == "2023-01-26-15.47.24.000000"
| where HostName == "myhost.com"
| where JobName == "runJob1"
| where InvocationId == "daily"
| fields RunID, ControllingRunID
| uniq
| format "(" "(" "OR" ")" "||" ")"

This gives:

( ( ControllingRunID="12345" OR RunID="67890" ) )

What I would like is:

( ( ControllingRunID="12345" OR RunID="67890" OR RunID="12345" OR ControllingRunID="67890" ) )

There could be many ID pairs of run/controlling IDs, and I want to search on any combination if possible.
Instructions for upgrading MongoDB are in the Splunk Admin Manual under "Migrate the KVStore storage engine" at https://docs.splunk.com/Documentation/Splunk/9.1.0/Admin/MigrateKVstore
I want to use the new search signature="test" in the below search. I don't want to add this new signature to the existing lookup.

| tstats summariesonly=true values(IDS_Attacks.action) as action from datamodel=Intrusion_Detection.IDS_Attacks by _time, IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.signature
| `drop_dm_object_name(IDS_Attacks)`
| lookup rq_subnet_zones Network as dest OUTPUTNEW Name, Location
| lookup rq_subnet_zones Network as src OUTPUTNEW Name, Location
| search NOT Name IN ("*Guest*","*Mobile*","*byod*","*visitors*","*phone*")
| lookup rq_emergency_signature_iocs_v01 ioc as signature OUTPUTNEW last_seen
| where isnotnull(last_seen)
| dedup src
| head 51
Thanks so much! This worked for me. Not super related, but do you know if it's possible to display the IP addresses instead of longitude and latitude on the cluster map?
I need to upgrade MongoDB from 3.6 to 4.2 as part of the pre-upgrade process for moving Splunk from 8.2.0 to 9.1.0. So far I have not found a link to a reference which explains how this is done in the context of a Splunk installation. Any clear recommendation is welcome.
Hello,

Below is a sample SPL that you can use for incidents that are already closed.

| `mc_incidents`
| search status_label="Closed"
| spath input=sla path=sla_total_time output=sla_time
| spath input=sla path=sla_units output=sla_units
| eval sla_seconds = if(sla_units=="h", 3600, if(sla_units=="d", 86400, 60))
| eval sla_seconds = sla_seconds * sla_time
| eval time_taken = update_time - mc_create_time
| eval sla_status = if(time_taken > sla_seconds, "not met", "met")
| table display_id, sla_status, assignee, status_label

Let us know if you have any questions.

Mallikarjuna
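The unit conversion and met/not-met comparison at the heart of that SPL can be sketched in Python; the function name and sample timestamps are illustrative, and unknown units fall back to minutes just as the nested if() does:

```python
# Seconds per SLA unit: hours, days, minutes
UNIT_SECONDS = {"h": 3600, "d": 86400, "m": 60}

def sla_status(sla_total_time: float, sla_units: str,
               create_time: float, update_time: float) -> str:
    """Return 'met' or 'not met'; unknown units default to minutes."""
    sla_seconds = UNIT_SECONDS.get(sla_units, 60) * sla_total_time
    time_taken = update_time - create_time
    return "not met" if time_taken > sla_seconds else "met"

print(sla_status(4, "h", 0, 3600))  # 1h taken vs 4h SLA  → met
print(sla_status(1, "h", 0, 7200))  # 2h taken vs 1h SLA  → not met
```

The epoch arithmetic (update_time - create_time compared against the SLA budget in seconds) is the same as the eval chain above.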
That's strange, because the tcpdump seemed to contain just SYN packets, whereas "an existing connection was forcibly closed" applies to... well, an existing, already-established connection. Unfortunately, it's hard to say what's going on on the network without access to said network. I've seen many different strange cases in my life. The most annoying so far was when a connection would get reset in the middle, and _both_ sides would receive RST packets. The customer insisted that there was nothing filtering the traffic. After some more pestering, it turned out there was an IPS which didn't like the certificate and was issuing RST to both ends of the connection. So there can be many different reasons for this. Compare the contents of the packet dump on both sides - maybe that will tell you something.
The splunkd.log is from the UF - my bad for erroneously writing "HF's splunkd.log" in the caption. The UF can't complete the connection on 9997 to the HF despite all evidence (at the network level):

- 9997 is allowed
- Firewall logs show traffic is allowed
- Other UFs in the same IP subnet can connect on 9997 with no problem (e.g. all UFs: 10.68.0.0/16, dest HF: 10.68.2.2:9997)

Why can other UFs, e.g. 10.68.10.10, .11, .12, .13, .14, .15 and many more ---> 10.68.2.2:9997 == OK, but this particular one, 10.68.10.16 ---> 10.68.2.2:9997, results in "An existing connection was forcibly closed by the remote host." and "The TCP output processor has paused the data flow. Forwarding to host_dest=10.68.2.2"?
As far as I remember, the add-on we got from Splunkbase (I admit it was some 4 years ago or thereabouts) wouldn't parse some fields properly. We ended up fixing the transforms by hand.