All Posts



You can't use lookup on an index, only on a lookup table. I think this should work and won't have the limitations of the subsearch:

((`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)) OR (index="tml_it-mandiant_ti" type=ipv4)
| eval origin=if(index="tml_it-mandiant_ti", "mandiant", "auth")
| eval IP_Addr = coalesce(value, dest, dst, Ip, source_ip, src_ip, src)
| stats dc(origin) as origins by IP_Addr
| where origins=2

It sets a new field 'origin' to record where the IP address is coming from. If the event is from the tml_it-mandiant_ti index, IP_Addr will be the value field; otherwise it will be one of the IP fields from your original coalesce. Then stats counts the number of distinct origins found for each IP address. You need it to be 2, indicating the IP address is in both indexes.
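The dc(origin) trick above can be mimicked outside Splunk to see why it works; a rough Python analogy (event and field names are illustrative, not from real data) of counting distinct origins per IP and keeping IPs seen in both sources:

```python
from collections import defaultdict

# Illustrative events: each has an IP and the source ("origin") it came from,
# mirroring the eval origin=if(...) step in the SPL above.
events = [
    {"IP_Addr": "10.0.0.1", "origin": "auth"},
    {"IP_Addr": "10.0.0.1", "origin": "mandiant"},
    {"IP_Addr": "10.0.0.2", "origin": "auth"},
]

# stats dc(origin) by IP_Addr: collect the distinct origins per IP
origins = defaultdict(set)
for event in events:
    origins[event["IP_Addr"]].add(event["origin"])

# where origins=2: keep only IPs that appear in both data sources
both = sorted(ip for ip, seen in origins.items() if len(seen) == 2)
print(both)  # ['10.0.0.1']
```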
If the comment field is always the same for the rule, then just add the comment to the top command:

index=notable | search status_label=Closed | top limit=5 rule_title comment
We have a case of a delay of an hour for a certain index that happened last week, while the indexing delays are normally up to half a minute. I'm struggling with the parameters for the MLTK to capture these specific cases as outliers. Any ideas how to set it up correctly? It’s the tolerance that seems to be affected by the spike itself.
Hi all, today I tried to reinstall Splunk, but that didn't solve the issue. I then uninstalled Splunk, losing all data. When I installed Splunk again, all settings were gone, so I restored the data, apps and settings from my backup. After some issues with the certificates and the admin password, I was able to get Splunk running without problems, but the data inputs weren't configured. So I restored the inputs.conf of the search app, where the data inputs are configured. After restarting Splunk the issue came back, and to my surprise the admin login was gone. Before, I had to log in with the admin user after each restart of Splunk. Deleting the inputs.conf doesn't help. I have now spent many hours researching without resolving the issue, and I've lost a day's worth of data :-(. Can anyone assist, please? Thank you.
Hello! I need some help from Splunkers! I'm using the search

index=notable | search status_label=Closed | top limit=5 rule_title

in Splunk Enterprise Security to list the top rule_title values. But I also need to bring the "comment" field of each rule_title into the table. Can you please help me? Thanks!
Thank you for your quick response. Your query makes sense to me, but it doesn't fully work; I think we should change the whole search. The "coalesce" function I used collects only the first non-null field value: if the dest value is not null, it collects only dest and ignores the rest of the values like src and src_ip. As far as I have researched, I think I should use lookup with the OUTPUT option to match all of the IPs (src, dest,..) against the lookup index's IPs.
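As noted, coalesce returns only its first non-null argument, which is why the remaining IP fields are dropped once dest is populated; a rough Python analogy (not Splunk code, values are made up) of that behavior:

```python
def coalesce(*values):
    """Return the first argument that is not None, like SPL's coalesce()."""
    return next((v for v in values if v is not None), None)

# dest wins, so the src_ip value is never looked at
print(coalesce("10.1.1.1", None, "192.168.0.5"))  # 10.1.1.1
# only when the earlier fields are null does a later one surface
print(coalesce(None, None, "192.168.0.5"))        # 192.168.0.5
```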
If I understand the use case, you can achieve the goal using a subsearch.

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)
| table dest, dst, Ip, source_ip, src_ip, src
| eval IP_Addr = coalesce(dest, dst, Ip, source_ip, src_ip, src)
| search [search index="tml_it-mandiant_ti" type=ipv4 | return 10000 IP_Addr=value]
| stats count by IP_Addr
| where count >= 1

The subsearch will return a list of up to 10,000 IP addresses in the form (IP_Addr=1.2.3.4 OR IP_Addr=2.3.4.5 OR ...), which the search command will use to filter results from cim_Authentication_indexes. The key thing is to make sure the field name returned from the subsearch exists in the data from the main search (in the example, IP_Addr rather than value).
There are some IP address values in `cim_Authentication_indexes`. This index is used for lookup. I want to check whether the IP addresses from `cim_Authentication_indexes` are in the second lookup index. I tried writing a query, but something about it is quite wrong.

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)
| table dest, dst, Ip, source_ip, src_ip, src
| eval IP_Addr = coalesce(dest, dst, Ip, source_ip, src_ip, src)
| append [search index="tml_it-mandiant_ti" type=ipv4 | table value]
| stats count by IP_Addr, value
| where count >= 1

Please correct this and help me out. Thanks.
Hello @gcusello, thanks a lot for the swift response. The query gives most of the info I was looking for. However, it contains multiple entries for a single index: if index A has 3 sourcetypes, it appears in 3 rows. Can we group them into a single row? e.g.:

_time     index  sourcetype   Vol_GB  percentage
17th Sep  Main   st1 st2 st3  100G    10.00%
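One way to collapse the per-sourcetype rows into a single row per index is to aggregate the sourcetypes with values() in a final stats pass. An untested sketch, assuming the license-usage search from the earlier reply (field names as in that search):

```
index=_internal source=*license_usage.log* type="Usage"
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as volumeB by _time idx st
| stats values(st) as sourcetypes sum(volumeB) as volumeB by _time idx
| eval Vol_GB=round(volumeB/1024/1024/1024,2)
```

values(st) turns the sourcetypes into a multivalue field in one row, while sum(volumeB) gives the combined volume for the index; the percentage column would still need a separate eventstats over the daily total.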
Hi @Yashvik, please try this search:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| bin span=1d _time
| stats sum(b) AS volumeB by _time idx st
| eval volumeB=round(volumeB/1024/1024/1024,2)
| sort 20 -volumeB

Ciao. Giuseppe
| fillnull value=0 Total_number_of_exported_profiles Total_number_of_exported_records
| eval total = Total_number_of_exported_profiles + Total_number_of_exported_records
Hello all, I need to identify the top log sources which are sending large volumes of data to Splunk. I tried the License Master dashboard, which isn't helping much. My requirement is to create a table which contains the following fields, e.g.: sourcetype, vol_GB, index, percentage.
Looks like the final sum is not calculated when one of the results is empty. If both are available, then the total is populated correctly; in my case only one of them is present. Any idea how to calculate the sum in this case?

| eval total = Total_number_of_exported_profiles + Total_number_of_exported_records
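This is expected behavior of eval arithmetic: if either operand is null, the whole expression evaluates to null, which is why the missing field has to be defaulted (e.g. with fillnull or coalesce) before adding. A rough Python analogy (not Splunk code) of the two behaviors:

```python
def eval_add(a, b):
    # SPL-style arithmetic: a null (None) operand makes the result null
    if a is None or b is None:
        return None
    return a + b

def guarded_add(a, b):
    # fillnull value=0 equivalent: treat missing operands as 0 first
    a = a if a is not None else 0
    b = b if b is not None else 0
    return a + b

print(eval_add(5, None))     # None  -- the "total" never appears
print(guarded_add(5, None))  # 5     -- total populated even with one side missing
```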
I've created an app action 'my_action_name' whose results I can collect in a playbook just fine:

phantom.collect2(container=container, datapath=["my_action_name:action_result.data"], action_results=results)

But I don't see the action_result.data datapath in the app documentation, nor can I pick it up in the VPE. I only have 'status' and 'message' available.
If

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles

gives you a result, and

index=test_index sourcetype="test_source" className=export
| stats sum(message.exportedRecords) as Total_number_of_exported_records

also gives you a result, then

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles sum(message.exportedRecords) as Total_number_of_exported_records

should give you two results which can be added together. Please recheck your searches.
This is not helping. Either Total_number_of_exported_profiles or Total_number_of_exported_records shows up, but not the sum of them. See the screenshot below.
Yes, what you describe is allowed as long as the text after the '=' is a valid regular expression.
Is this what you mean?

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles sum(message.exportedRecords) as Total_number_of_exported_records
| eval total = Total_number_of_exported_profiles + Total_number_of_exported_records
@ITWhisperer Thanks for the reply. I tried the below individually for getting the sum of all records for each event type:

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles

index=test_index sourcetype="test_source" className=export
| stats sum(message.exportedRecords) as Total_number_of_exported_records

The above queries run just fine by themselves, but I am more interested in adding both these results into one. Also, the common field you were asking about is message.type=export_job, which is available in both events.
I got the following error in my Splunk error logs:

Init failed, unable to subscribe to Windows Event Log channel Microsoft-Windows-Sysmon/Operational: errorCode=5

The UniversalForwarder is installed on a Windows 10 desktop (not part of a domain). I can see Sysmon logging in the Event Viewer, and I can forward the System and Security logs, but not the Sysmon logs. What am I overlooking here? inputs.conf:

[WinEventLog://Security]
disabled = 0
[WinEventLog://System]
disabled = 0
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
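errorCode=5 is the Windows ERROR_ACCESS_DENIED code, so one likely cause (an assumption, not a confirmed diagnosis of this setup) is that the account the SplunkForwarder service runs as lacks read access to the Sysmon channel, which is commonly locked down more tightly than System and Security. As a first check, the channel's access list can be inspected from an elevated prompt:

```
:: show the channel configuration, including the channelAccess SDDL
wevtutil gl Microsoft-Windows-Sysmon/Operational
```

The channelAccess SDDL string in the output shows who may read the log; running the forwarder service as Local System, or granting the service account read access to the channel, is the usual remedy suggested for this error.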