All Posts

Hi All, it would be a great help if someone could help me figure this out. An app is deployed on the UFs to collect these logs into Splunk under the index wineventlog. I can see two different sourcetypes (xmlwineventlog, XmlWinEventLog) under the wineventlog index:
sourcetype: XmlWinEventLog (source: "XmlWinEventLog:Application", "XmlWinEventLog:Security", "XmlWinEventLog:System")
sourcetype: xmlwineventlog (source: "WinEventLog:Microsoft-Windows-Sysmon/Operational", "WinEventLog:Microsoft-Windows-Windows Defender/Operational")
Where should I check to find out what is producing these two distinct, case-sensitive sourcetypes? Thanks
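A quick way to narrow this down (a sketch; adjust the index name to your environment) is to break the events out by sourcetype, source, and host, so you can see which forwarders and which inputs emit each casing:

| tstats count where index=wineventlog by sourcetype, source, host
| sort - count

Then, on a forwarder that reports the lowercase variant, running splunk btool inputs list WinEventLog --debug shows which app and stanza define that input. A common cause of this kind of mismatch is an explicit sourcetype override in one app's inputs.conf while the other inputs rely on the default XmlWinEventLog produced by renderXml.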
Hi @Vivin.D, Can you share what the error is? 
| makeresults
| eval field2count = split("n,y,n,n,y,n,n,y,n,n,n,y",",")
| mvexpand field2count
| stats count(eval(field2count="n")) as n count(eval(field2count="y")) as y count(field2count) as total
| eval n = round(n/total,3) *100, y = round(y/total,3) *100
| fields - total
| transpose
| rename column as field2count, "row 1" as total
If there is another key, serial_number, how could I add this to the chart?
rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| search error_code=*
| chart values(error_code), values(serial_number) by _time
I would like to show the error code, the time, and the serial number associated with the error code.
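One possible approach (a sketch, assuming serial_number is already an extracted field and each event carries at most one serial number) is to group by the error code instead of charting two independent values() columns, so the serial numbers stay paired with their error code:

| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| search error_code=* serial_number=*
| bin _time span=1h
| stats values(serial_number) as serial_number count by _time, error_code

The 1-hour span is just an example. With chart values(x), values(y) by _time the two multivalue columns lose the pairing between a given error code and its serial number, which is why stats by _time, error_code is usually easier to read.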
Hello everyone, my problem is as follows: I need to install Splunk SOAR in my home lab. Since the supported versions target CentOS 7/8, which are deprecated, problems arise the moment I launch the soar-installer or the soar-prepare-installer file. I have searched the community and the web but had no luck. Is there a way to install SOAR on Ubuntu? I know Amazon Linux 2 and RHEL are recommended, but is it really the case that there is no way to install SOAR on any other Linux distribution? Thank you, biwanari
You are correct in assuming this is JSON data, the message key is the top node, and your rex works nicely. However, when I try to chart this, the result contains almost entirely empty error_code fields. Some insight on how to remove the empty error_code fields and create a relevant chart would be appreciated.
spath
| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| chart values(error_code) by _time
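One way to drop the empty rows (a sketch, assuming events without an ErrorCode in the message simply never get an error_code field from the rex) is to filter on the field before charting and to bin the time axis so the chart stays readable:

| spath
| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| where isnotnull(error_code)
| bin _time span=15m
| chart count by _time, error_code

The 15-minute span is only an example; this charts how often each error code occurs per time bucket rather than listing the codes as multivalue cells.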
Apps are empty either because the data they need isn't present or because the data can't be found.  You've shown the former is not true so it must be the latter. Confirm the data is in the index(es) where Veeam expects to find it.  If Veeam uses a datamodel (I suspect it does) then your data must be tagged so it is found by the DM.  Look at the DM definition to see which indexes and tags it needs.
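As a quick sanity check (a sketch; the index name veeam is an assumption, substitute whatever index your inputs actually write to), you can look at which eventtypes and tags Splunk applies to the events the app should pick up:

index=veeam earliest=-24h
| stats count values(eventtype) as eventtype values(tag) as tag by index, sourcetype

If eventtype and tag come back empty, or never include the tags required by the datamodel's constraint searches, the app's eventtypes are not matching your events, usually because the sourcetype or index differs from what the add-on expects.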
Morning, Splunkers! I've got a fun one today. I need to find the most resource-efficient way (i.e., the fastest way that won't have my IT guys calling me up and wanting to know why their ports are smoking) to return the values in one field that have only one unique value in another field. For example, in the following table my search result needs to return only Value B; Values A and C will be thrown out because they don't have a unique value in Field B.

Field A    Field B
Value A    Value A1
Value A    Value A2
Value B    Value B1
Value C    Value C1
Value C    Value C2
Value C    Value C3

The big problem here is Field B can be any number of different values, so I can't query specifically on what those values may be. I have a solution for this, and it works, but it doesn't work "at scale" because I'm looking through literally billions of records to pull this information. Here's what I'm already doing:

| tstats count where index=myindex by Field A Field B
| stats values(Field B) as Field B by Field A
| where mvcount(Field B)=1

This takes a few minutes if I'm pulling, say, a 15-minute window, and I need to pull 90 days. I really don't want to have to do it 15 minutes at a time and stitch everything together afterward. I will if I have to, but there's got to be a better way to do what I'm trying to do that won't make the system flip me the bird and call it a day. Suggestions?
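One thing that may help (a sketch; FieldA and FieldB stand in for your real field names, and this only works because your existing search shows both fields are usable in tstats) is to push the distinct count into tstats itself, so the indexers do the cardinality work and you never ship every Field B value back to the search head:

| tstats dc(FieldB) as distinct_b where index=myindex by FieldA
| where distinct_b=1
| fields FieldA

If even that is too slow over 90 days, a scheduled search that writes per-day tstats results to a summary index, plus a final roll-up over the summaries, is the usual way to avoid re-scanning billions of raw records in one go.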
Yes, especially in distributed environments, the search head must be aware of the index.  No storage needs to be created, however.  The SH merely needs to know the index exists.
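For example, a minimal indexes.conf stanza on the search head is enough for the SH to recognize the index (the name alert_events and the default paths below are placeholders; this assumes the search head forwards its own data to the indexers, as is common practice, so nothing meaningful accumulates in these local paths):

[alert_events]
homePath   = $SPLUNK_DB/alert_events/db
coldPath   = $SPLUNK_DB/alert_events/colddb
thawedPath = $SPLUNK_DB/alert_events/thaweddb

The same index must also exist on the indexers, where the events are actually stored.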
It's impossible to reingest all the data, as it has been collected over several years. One of my first tasks is to check the bucket IDs and make sure there are no duplicates, but I'm pretty sure there aren't.
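One way to inventory the bucket IDs (a sketch, run from a search head that can see the peers in question; note that replicated copies of the same bucket legitimately show the same bucketId on several peers, so what matters is whether the same bucketId shows up in both clusters):

| dbinspect index=*
| stats values(splunk_server) as peers dc(splunk_server) as peer_count by index, bucketId

Exporting this from each cluster and comparing the bucketId columns should confirm whether there is any overlap before the merge.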
Hi,

I would like to merge two different index clusters. One has always been here and the other was added later from an existing environment. Except for the internal indexes, each cluster has its own indexes. The "expiration scenario" is the last option we want, because we would like to decommission the cluster A servers as they are too old.
Of course this assumes you don't have any escaped quotes in your error string. That's the problem with:
1) manipulating structured data with regexes,
2) sending structured data (JSON, XML) as part of an otherwise unstructured event.
So generally it should work, but be aware that there might be border cases where it will not capture the whole message. This version also handles possible escaped quotes:
(?<msg>"errorMessage":".*?(?<!\\)")
You could try opening a support case to see if they will export your data.  Consider using the REST API to run queries (index=foo earliest=0 latest=now | table *) that return all the data from an index and then save that data in the desired format.
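For example, a sketch of that export via the search/jobs/export REST endpoint (host, port, credentials, and the index name foo are placeholders; output_mode can also be json or xml):

curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export -d search="search index=foo earliest=0 latest=now | table *" -d output_mode=csv > foo_export.csv

This streams results as they are produced, which avoids the result-count limits of saved search jobs, though exporting an entire index this way can still take a long time.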
Hello @Xiangning.Mao, thanks for the details. It worked as expected.
The underLabel option does not perform evaluations. Do the eval in a separate statement. Note that $result.fieldname$ only reads the first result row, so the second count has to land on that same row (appendcols rather than append):
<search>
  <query>index=* EventCode=25753 | stats count(EventCode) as toto | appendcols [| search index=* EventCode=* | stats count(EventCode) as toto2]</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
  <done>
    <condition>
      <set token="NbHost">$result.toto$</set>
      <set token="NbHost2">$result.toto2$</set>
      <eval token="ratio">$NbHost$ / $NbHost2$</eval>
    </condition>
  </done>
</search>
<option name="drilldown">none</option>
<option name="underLabel">$ratio$</option>
Hi Splunkers, I have a doubt about a specific Splunk alert trigger action: the "log event" one. At the end of the docs I can see: "You must also define the destination index on both the search head and the indexers." Does this mean that, even in a distributed environment, I must create the index used to save the alert events on both the indexers and the search heads?
To answer my own question: this was a browser issue. Both the Splunk REST API and Splunk Web must use https for the REST call to succeed. In my case, this means https://localhost:8000 for Splunk Web and https://localhost:8090 for the API.
sort truncates at 10k results by default - try something like this: | sort 0 -clientip
@dkmcclory - It depends on what your API call does. If your API call collects data and ingests it into Splunk, then use an input; otherwise use a scheduled alert/report.
Are you sure you have searchable buckets from this site2 index in site1, and the other way around?
site_search_factor = origin:2,total:2
With this setting, both searchable copies of a bucket originating in site2 stay in site2, so searches from site1 have to reach across the intersite link for primaries, since site1 has no searchable copies of those buckets.
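A sketch of the kind of change that keeps a searchable copy in each site (set in server.conf on the cluster manager; this assumes a two-site cluster whose replication factor already places a copy in each site, and changing it will trigger bucket fixup activity):

[clustering]
site_search_factor = origin:1, total:2

With two sites and total:2, the copy that is not in the origin site must land in the other site, so each site can serve its own primaries locally.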