All Posts
If I understand the use case, you can achieve the goal using a subsearch.

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)
| table dest, dst, Ip, source_ip, src_ip, src
| eval IP_Addr = coalesce(dest, dst, Ip, source_ip, src_ip, src)
| search [search index="tml_it-mandiant_ti" type=ipv4 | return 10000 IP_Addr=value]
| stats count by IP_Addr
| where count >= 1

The subsearch will return a list of up to 10,000 IP addresses in the form (IP_Addr=1.2.3.4 OR IP_Addr=2.3.4.5 OR ...), which the search command will use to filter results from cim_Authentication_indexes. The key thing is to make sure the field name returned from the subsearch exists in the data from the main search (in the example, IP_Addr rather than value).
There are some IP address values in `cim_Authentication_indexes`; this index is used for lookups. I want to check whether the IP addresses from `cim_Authentication_indexes` are also in the second lookup index. I tried writing a query, but something is quite wrong with it:

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)
| table dest, dst, Ip, source_ip, src_ip, src
| eval IP_Addr = coalesce(dest, dst, Ip, source_ip, src_ip, src)
| append [search index="tml_it-mandiant_ti" type=ipv4 | table value]
| stats count by IP_Addr, value
| where count >= 1

Please correct this and help me out. Thanks.
Hello @gcusello  Thanks a lot for the swift response. The query gives most of the info I was looking for. However, it contains multiple entries for a single index. Let's say index A has 3 sourcetypes; it then appears in 3 rows. Can we group them into a single row? e.g.:

_time     index  sourcetype   Vol_GB  percentage
17th Sep  Main   st1 st2 st3  100G    10.00%
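One possible way to get that single-row layout (a sketch, not an answer from the thread) is to re-aggregate the per-sourcetype rows with stats values(), so each index shows all of its sourcetypes in one multivalue cell. The field names (idx, st, volumeB) are assumed to match the license-usage search used elsewhere in this thread:

```
index=_internal source=*license_usage.log* type="Usage"
| bin _time span=1d
| stats sum(b) as volumeB by _time idx st
| stats values(st) as sourcetypes sum(volumeB) as volumeB by _time idx
| eval Vol_GB=round(volumeB/1024/1024/1024,2)
```

The second stats collapses the three sourcetype rows per index into one row, with sourcetypes as a multivalue field.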
Hi @Yashvik, please try this search:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| bin span=1d _time
| stats sum(b) AS volumeB by _time idx st
| eval volumeB=round(volumeB/1024/1024/1024,2)
| sort 20 -volumeB

Ciao. Giuseppe
| fillnull value=0 Total_number_of_exported_profiles Total_number_of_exported_records
| eval total = Total_number_of_exported_profiles + Total_number_of_exported_records
Hello All, I need to identify the top log sources which are sending large volumes of data to Splunk. I tried the License Master dashboard, which isn't helping much. My requirement is to create a table which contains the following fields, e.g.: sourcetype, vol_GB, index, percentage.
Looks like the final sum is not calculated when one of the results is empty. If both are available, the total is populated correctly; in my case only one of them is present. Any idea how to calculate the sum in this case?

| eval total = Total_number_of_exported_profiles + Total_number_of_exported_records
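This happens because in SPL, arithmetic with a null field yields null, so the total disappears whenever either operand is missing. A minimal sketch of the usual workaround, defaulting each operand to 0 before adding (field names as in the question):

```
| eval total = coalesce(Total_number_of_exported_profiles, 0) + coalesce(Total_number_of_exported_records, 0)
```

This is equivalent to running fillnull value=0 on both fields before the eval.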
I've created an app action 'my_action_name' whose results I can collect in a playbook just fine: phantom.collect2(container=container, datapath=["my_action_name:action_result.data"], action_results=results). But I don't see the action_result.data datapath either in the app documentation, nor can I pick it up in the VPE. I only have 'status' and 'message' available.
If

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles

gives you a result, and

index=test_index sourcetype="test_source" className=export
| stats sum(message.exportedRecords) as Total_number_of_exported_records

also gives you a result, then

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles sum(message.exportedRecords) as Total_number_of_exported_records

should give you two results which can be added together. Please recheck your searches.
This is not helping. Either Total_number_of_exported_profiles or Total_number_of_exported_records shows up, but not the sum of them. See the screenshot below.
Yes, what you describe is allowed as long as the text after the '=' is a valid regular expression.
Is this what you mean?

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles sum(message.exportedRecords) as Total_number_of_exported_records
| eval total = Total_number_of_exported_profiles + Total_number_of_exported_records
@ITWhisperer Thanks for the reply. I tried the below individually for getting the sum of all records for each event type:

index=test_index sourcetype="test_source" className=export
| stats sum(message.totalExportedProfileCounter) as Total_number_of_exported_profiles

index=test_index sourcetype="test_source" className=export
| stats sum(message.exportedRecords) as Total_number_of_exported_profiles

The above queries run just fine by themselves, but I am more interested in adding both of these results into one. Also, the common field you were asking for could be message.type=export_job, which is available in both events.
I got the following error in my Splunk error logs:

Init failed, unable to subscribe to Windows Event Log channel Microsoft-Windows-Sysmon/Operational: errorCode=5

The UniversalForwarder is installed on a Windows 10 desktop (not part of a domain). I can see Sysmon logging in the Event Viewer, and I can forward the System and Security logs, but not the Sysmon logs. What am I overlooking here?

inputs.conf:

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
Timestamp extraction is done before transforms are processed. Consider setting props based on source rather than sourcetype:

[source::object_rolemap_audit.csv]
sourcetype = awss3:object_rolemap_audit

[source::authz-audit.csv]
sourcetype = awss3:authz_audit

[aws:s3:csv]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
TRUNCATE = 20000

[awss3:object_rolemap_audit]
TIME_FORMAT = %d %b %Y %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

[awss3:authz_audit]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3Q
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
"smartstore" and "AWS S3" are not the same thing.  SmartStore (S2) is a Splunk feature that separates storage from compute.  It relies on storage providers that follow the S3 standard, but does not h... See more...
"smartstore" and "AWS S3" are not the same thing.  SmartStore (S2) is a Splunk feature that separates storage from compute.  It relies on storage providers that follow the S3 standard, but does not have to use AWS.  AWS S3 is just one provider of on-line storage, but it is not suitable for storing live Splunk indexes. The reason S2 gets away with using AWS S3 for storing indexes is because it keeps a cache of indexed data local to the indexers.  It's this cache that serves search requests; any data not in the cache has to be fetched from AWS before it can be searched. Note that S2 stores ALL warm data.  There is no cold data with S2. All warm data remains where it is until it is rolled to cold or frozen.  There is no way for warm buckets to reside in one place for some period of time and some place else for another period of time. Freezing data either deletes it or moves it to an archive.  Splunk cannot search frozen data. To keep data local for 45 days and remote for 45 days would mean having a hot/warm period of 45 days and a cold period of 45 days.  Note that each period is measured based on the age of the newest event in the bucket rather than when the data moved to each respective storage tier. Data moves from warm to cold based on size rather than time so you would have to configure the buckets so they hold a day of data and then size the volume so it hold 45 days of buckets.  Configure the cold volume to be remote.  Splunk will move the buckets to the cold volume as the warm volume fills up.  Data will remain in cold (remote) storage until it expires (frozenTimePeriodInSecs=3110400 (90 days)).
@PickleRick There are multiple eventTypes in my logs. If I include all eventTypes then I get a lot of results. Please assist.
You're omitting the important part - the "other eventtype search".
Also, you don't have to escape slashes in the regex.
Regardless of splitting the event, there are no "merged" cells in Splunk, so you can't visualize it this way.