All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


What happened to date_wday, date_hour, and the others? Am I going nuts, waking from a dream where they used to be there all the _time? Looks like date_mday and date_month are still there... on 8.2.6.
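A possible search-time workaround I'm considering (just a sketch, assuming the goal is only to get equivalent values back; not verified against the built-in extractions):

| eval date_wday=lower(strftime(_time, "%A"))
| eval date_hour=tonumber(strftime(_time, "%H"))
| eval date_minute=tonumber(strftime(_time, "%M"))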
Hi Team, I have two searches: one is a normal search and the other is a lookup, and both return a count. Both always return a single value, so I used appendcols. My end goal is to perform an eval operation on them as shown below.

Query:
index=data
| stats dc(number) as X_data
| appendcols [| inputlookup data.csv | stats dc(number) as Y_data]
| eval result=X_data/Y_data

Since the outer search (X_data) returns quickly, the eval seems to evaluate the expression before the appendcols subsearch (Y_data) has completed.

Example:
X_data = 237
Y_data = 71
Expected result = 3.29
Actual result = 1.00

How do I fix this issue?
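One workaround I'm considering (a sketch, not a confirmed fix): combine the two single-row results with append and collapse them onto one row before the eval, so the division only runs once both values exist:

index=data
| stats dc(number) as X_data
| append [| inputlookup data.csv | stats dc(number) as Y_data]
| stats first(X_data) as X_data, first(Y_data) as Y_data
| eval result=round(X_data/Y_data, 2)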
Has anyone found a way to look at metrics for stored procedures that are not in the Top 200 queries? I have been able to narrow the timeframe down to minutes and find some, but that is very time consuming and painful. Is there an API that would allow us to pull metrics based on stored procedure name? Or is there a way to increase the number of Top queries to, say, 300? Any information related to accessing these rarely executed stored procedures would be appreciated.
How to filter a query?
I've been working on a project with JSON in the event where Tags are stored similar to this:

{
  "Name": "example",
  "Tags": [
    {"Key": "Building", "Value": "1"},
    {"Key": "Floor", "Value": "2"},
    {"Key": "Color", "Value": "Red"}
  ]
}

The default extraction from spath provided the Tags{}.Key and Tags{}.Value fields, which were pretty much useless as-is. What I wanted was for each tag to be a field on the event so that you could use them in a search, e.g. Building=1 AND Color=Red. But the number of tags varies, and the same value could appear in multiple tags (e.g. Building=1 AND Floor=1).

Here's what I came up with so far; I'm curious if anyone has a better suggestion.

| rename Tags{}.Key as Key, Tags{}.Value as Value
| eval zip=mvzip(Key, Value, ":")
| mvexpand zip
| rex field=zip mode=sed "s/$/\"}/g"
| rex field=zip mode=sed "s/^/{\"tag./g"
| rex field=zip mode=sed "s/:/\": \"/g"
| spath input=zip
| transaction Name

This approach basically uses mvzip and mvexpand to pull the Tags apart, then uses rex with sed to rebuild a JSON object to pass back through spath. It seems pretty complex, but I just can't see a better way to do it. I'm interested to hear if anyone has a better suggestion.
Hello, we currently use the Windows Defender ATP v3.6.0 app in our Splunk SOAR Cloud instance. I've discovered that the 'run query' action uses an outdated advancedqueries API endpoint that does not expose all of the tables available in Advanced Hunting. I'd like to update the 'run query' action to use the advancedhunting API endpoint that has the proper tables exposed. I'm familiar with the code and where it needs to be updated, but not with how to create a custom version of this app. What is the proper way to customize the app and install it in our SOAR Cloud instance?
Hello Splunkers, I need help with how to monitor a private storage S3 endpoint. We have explored the Splunk Add-on for ECS, but it looks like it is for monitoring the ECS systems themselves, so it asks for a username/password in order to gain access to ECS management capabilities for monitoring purposes. What we are trying to accomplish is to get data from the ECS S3 private endpoint using a username and secret key, but we are having a hard time finding a solution for this. Do any of you have an idea of how we can go about this? Thanks in advance.
Hello! We are enriching some data and want to be able to then search the results matched from the lookup table. It works and we can search one of the lookup tables, but the other doesn't return any results, although the values are there...

Here is the base search:

index="allhosts" ip=*
| stats count by hostname, ip, domain
| eval hostname=upper(hostname)
| rex field=hostname "^(?P<hostcode>..)"
| lookup hostcode.csv hostcode AS hostcode
| lookup applications.csv ipaddress AS ip
| lookup vlan.csv Subnet AS ip

This works great: I can see a table with all hosts, their first two letters (naming convention), matched with their application and VLAN:

hostname  ip        domain   Application   hostcode  VLAN
ABCD      10.1.1.1  Domain1  Application1  AB        VLAN1
CDEF      10.1.1.2  Domain1  Application2  CD        VLAN2

When I add | search VLAN=VLAN1, it shows only the first row... same when I add VLAN2. BUT when I add | search Application=Application1, no results. If I add | search Application=*, no results...

Any ideas why this particular field will not return results?! (A diagnostic I'm considering is sketched below.) Thanks!
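One diagnostic I was thinking of trying (a sketch; my assumption is that the output column in applications.csv may not actually be named Application, or may carry stray whitespace), inspecting the lookup file directly:

| inputlookup applications.csv
| fieldsummary
| table field count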
Hi Team,

I have a query where I am doing the timechart and the % calculation manually (not using timechart and calculating the % inside the timechart line, as that doesn't solve my purpose, hence doing it this way). The query is working fine, however it shows all values of the field, and I want that field to show only the top 10 by volume or count.

Query:
index=xyz (catcode="*") (prodid="1") (prodcat="*") success="*"
| bucket _time span="1d"
| eval TheError=if(success="false" AND Error_Value like "%%", count, 0)
| eval Success=if(success="true", count, 0)
| stats sum(TheError) as "Failed", sum(Success) as "Passed", sum(count) as Total by _time, catcode
| eval Failed_Percent=round((Failed/Total)*100, 2)
| fields _time, catcode, Failed_Percent
| xyseries _time, catcode, Failed_Percent

I don't want to use eventstats because it would count at the prodid level and not at the catcode level, hence this query. This query counts all failures with an error per catcode and all attempts per individual catcode, then calculates the %; with eventstats the total count would not be per catcode but across all prodid, i.e. the total attempts across all catcodes. (One idea I'm considering is sketched below.)

Thanks in advance.
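The idea (a sketch, assuming "top 10 by volume" means the ten catcodes with the most events overall in the time range): filter the base search with a subsearch that returns only those catcodes, and leave the rest of the pipeline unchanged:

index=xyz (catcode="*") (prodid="1") (prodcat="*") success="*"
    [ search index=xyz (catcode="*") (prodid="1") (prodcat="*") success="*" | top limit=10 catcode | fields catcode ]
| bucket _time span="1d"

...followed by the original eval / stats / xyseries lines as they are.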
I went through the Knowledge Base looking for anything that I might have missed concerning health rule violations and policies. My goal is to be able to send an email notification if HTTP errors occur. I can generate HTTP errors using my device, but despite the occurrences no health rule is violated and no action is taken. Any help would be appreciated. More settings screenshots are available upon request.
Looking to change the navigation menu background color based on panel search criteria. The idea is that I don't want to go into each dashboard to see which alerts are there; I just want to see the color change dynamically to green, yellow, or red on the navigation menu, and then go to the specific dashboard.
Because of licensing reasons, I want to stop indexing these events (as they make up almost 50% of the index): index=cisco dest_port=53 - so basically DNS requests. Is it possible, for this specific index=cisco, to stop indexing the logs where dest_port=53? I can't do it from the Cisco firewall itself. I googled a bit and the consensus seems to be sending the logs to nullQueue and modifying props.conf and transforms.conf. But what I'm struggling with is: where are these files? My Splunk architecture is 2 search heads in a cluster and 1 license manager server. Where do I modify these files? On both search heads?
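For reference, this is the kind of configuration I keep seeing suggested (a sketch only; the sourcetype stanza name and REGEX below are placeholders I would have to adapt to the actual Cisco events, and as I understand it these settings only take effect on the instance that first parses the data, i.e. a heavy forwarder or indexer, not on a search head):

props.conf:
[your_cisco_sourcetype]
TRANSFORMS-drop_dns = drop_cisco_dns

transforms.conf:
[drop_cisco_dns]
REGEX = <pattern that matches the port-53/DNS events in the raw text>
DEST_KEY = queue
FORMAT = nullQueue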
An example: we have defined two malicious URLs in local_http_intel. This triggers false positives in the Threat Activity dashboard of ES on the valid and safe domain github.com. How can we prevent / fix this?
As the title suggests, we are planning on migrating our heavy forwarder to a separate VLAN. However, this is the first time I've done anything like this, and I was wondering what things I need to consider. If anyone can help, that would be great.
Hello all,

I currently have 6 indexers. Three of them receive forwarded data from outside sources, and the other three were added much later. I have SF 1 and RF 1 (I understand this is not optimal, but due to space constraints that was the best I could do).

My main question is: why isn't a data rebalance rebalancing primary buckets? Even with an RF of 1, the three new indexers seem to only receive replicated buckets, which is rather confusing.

I have tried the procedure from "Rebalance the indexer cluster" in the Splunk documentation:

curl -k -u admin:pass --request POST \
https://localhost:8089/services/cluster/manager/control/control/rebalance_primaries

Nothing happened after that.
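For context, this is roughly how I've been checking the bucket distribution (a sketch run from a search head):

| dbinspect index=*
| stats count as buckets by splunk_server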
I have a field 'JOB_STATUS' with the values 'STARTING' and 'SUCCESS'. From this I have to calculate the runtime: runtime = time of SUCCESS minus time of STARTING. Can you please let me know how to do this?
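A sketch of what I'm imagining (assuming there is some job identifier field, here called JOB_ID as a placeholder, that ties the STARTING and SUCCESS events together):

index=<your_index> JOB_STATUS IN ("STARTING", "SUCCESS")
| stats earliest(_time) as start_time, latest(_time) as end_time by JOB_ID
| eval runtime_seconds=end_time-start_time
| eval runtime=tostring(runtime_seconds, "duration")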
Hello all, I have a problem with a duplicated rule name in the Incident Review multiselect box. In Settings -> Searches, reports, and alerts I have only 1 search. In Content Management there is only 1 too. I also checked correlationsearches_lookup for duplicates. How does this multiselect work (from what lookup or search does it take its data)? And how can I fix my problem with the duplicated names?
Hello, what is the expected log size for FMC log ingestion, for example with 180 days of retention? I am using Splunk for security operations and wanted to know what kinds of logs are relevant to this purpose. (A rough measurement sketch I'm considering is below.) Thank you, Adrian
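Once some FMC data is flowing, a sketch for measuring actual daily ingest from the license usage logs (assuming the license manager's _internal index is searchable; the st=cisco* filter is a guess and should be adjusted to whatever sourcetypes the FMC add-on actually uses):

index=_internal source=*license_usage.log type="Usage" st=cisco*
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as GB_per_day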
Hello community,

I am trying to set up a search to catch any successful logon after x failed logons within y minutes. However, I am struggling to see how I would build this search. Searching for successful events is easy:

index=<index> status="logged in"

As is finding unsuccessful events:

index=<index> message="Invalid credentials." status="nog logged"

I figured I could do a count by IP address and/or username for the failed events, but how do I connect the two and add time? I am assuming this should be some combination of "and"/eval/if and where? Just to get a sense of what I am thinking:

index=<index> ... ip-address WHERE status="logged in" AND (index=<index> message="Invalid credentials." status="nog logged" >3 WHERE delta_time<10min)

What I would like is an output with any IP address where a successful logon was preceded by, say, 3 failed logons within 10 minutes. I am assuming this will be a large and complex search, at least for me, so any suggestions would really be appreciated. (A rough sketch of the direction I'm thinking of follows below.)

Best regards
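The sketch (untested; it assumes the failed and successful events share an ip field and uses a 3-failure / 10-minute threshold):

index=<index> (status="logged in" OR (message="Invalid credentials." AND status="nog logged"))
| eval outcome=if(status="logged in", "success", "failure")
| sort 0 ip _time
| streamstats current=t global=f window=4 count(eval(outcome="failure")) as recent_failures range(_time) as window_span by ip
| where outcome="success" AND recent_failures>=3 AND window_span<=600
| table _time ip recent_failures window_span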
I am getting the output time, but I want to round the time value up to the next 10th minute. The expected output is the rounded time. Can anyone please guide me on how to write a query for this?

File time                  Rounded time
07/19/2022 12:16:48.303    07/19/2022 12:20:00.000
07/19/2022 12:11:36.660    07/19/2022 12:20:00.000
07/19/2022 09:33:48.091    07/19/2022 09:40:00.000
07/19/2022 00:30:24.749    07/19/2022 00:40:00.000
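A sketch of what I have in mind (assuming the field is called file_time, as a placeholder, and is in the MM/DD/YYYY format shown; ceiling() leaves values already on a 10-minute boundary unchanged):

| eval file_epoch=strptime(file_time, "%m/%d/%Y %H:%M:%S.%3N")
| eval rounded_epoch=ceiling(file_epoch/600)*600
| eval rounded_time=strftime(rounded_epoch, "%m/%d/%Y %H:%M:%S.%3N")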