All Topics

Hello, I am experiencing an interesting issue. I am trying to filter for a specific value in a numeric field. The following statement works fine:

    index="IndexA"
    | eval A.distance=trim('A.distance',"'")
    | eval A.distance='A.distance'/100
    | search A.distance=1

If I replace the search with a where, I get the error "Error in 'where' command: Type checking failed. The '==' operator received different types."

    index="IndexA"
    | eval A.distance=trim('A.distance',"'")
    | eval A.distance='A.distance'/100
    | where A.distance=1

Event coverage of this field is 100%, and typeof() returns "Number" for all of the values. None of the values have digits after the decimal point. We are using Splunk Enterprise 8.2.3.3. Does anyone know why the where statement yields an error in this case? Thanks
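A common workaround for this class of error (a sketch, not verified against this data; it assumes the field is still typed as a string after the trim, which is what the type-checking message suggests) is to force a numeric conversion with tonumber() and to single-quote the dotted field name inside where:

```
index="IndexA"
| eval A.distance=trim('A.distance',"'")
| eval A.distance=tonumber('A.distance')/100
| where 'A.distance'=1
```

Unlike search, where performs strict type checking, so comparing a string-valued field to the number 1 fails even when every value looks numeric.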
<form script="search:tabs.js,custom_vizs:autodiscover.js, custom_vizs:dendrogram.js" stylesheet="search:da_service.css,search:tabs.css,custom_vizs:dendrogram.css" hideSplunkBar="true">

common.js:1798 SyntaxError: Unexpected token , in JSON at position 49
    at JSON.parse (<anonymous>)
    at _decodeOptions (dashboard_1.1.js:277)
    at dashboard_1.1.js:277
    at Function._.each._.forEach (common.js:1798)
    at Function.ready._enableComponentDivs (dashboard_1.1.js:277)
    at dashboard_1.1.js:277
    at Object.execCb (eval at module.exports (common.js:1117), <anonymous>:1658:33)
    at Module.check (eval at module.exports (common.js:1117), <anonymous>:869:55)
    at Module.eval (eval at module.exports (common.js:1117), <anonymous>:1121:34)
    at eval (eval at module.exports (common.js:1117), <anonymous>:132:23)

Failed to load resource: net::ERR_CONNECTION_TIMED_OUT https://e1345286.api.splkmobile.com/1.0/e1345286//0/1

Refused to apply style from 'https://*************:8000/en-US/static/@2e1fca123028.225/app/custom_vizs/dendrogram.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
Hello, As you can see, the two single-value panels are not correctly aligned. Is there a way to avoid this without changing the length of the title? Thanks
Hi all, I am wondering if it is possible to store hashed values for the passwords that are required by some of the receivers. Some receivers, especially database-related ones, require a username and password to connect and collect measurement data, and I do not feel safe saving passwords in plaintext in agent_config.yaml.
I use the nlp-text-analytics app for similarity between two strings, but I get the above error. When I run lines 1, 2, and 3, the result is about 14,973,764 (3814 * 3926). Interestingly, if I reduce the data so that, for example, the output of the third line is 13,348,400 (3400 * 3926), it does not give an error and the result of the fourth line is returned. This suggests that the similarity command has problems with data above roughly 13.5 million rows.

My Splunk is 8.2, nlp-text-analytics is 1.1.3, and python_upgrade_readiness_app is 4.0.2.
I have a lookup with IP addresses (CIDR), and I need to find the intersections between those IP ranges. There is an eval function in Splunk called cidrmatch. For every row whose range intersects another range in the same table, I want to set a flag: in the field notes, set the value to 1.
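For reference, the eval function is cidrmatch(X, Y): it returns true if IP address Y belongs to the CIDR subnet X. A minimal sketch (the lookup name ip_ranges.csv and the field names cidr_range and ip_addr are hypothetical placeholders):

```
| inputlookup ip_ranges.csv
| eval notes=if(cidrmatch(cidr_range, ip_addr), 1, 0)
```

Note that cidrmatch tests a single IP against a range; checking two CIDR ranges for overlap may require testing each range's network address against the other range.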
We want to create and update incidents with a specific Severity level using the custom command "snowincident". The available options do not include a severity field we can use (a severity field is available in the snowevent custom command).
Hello everyone, I want to have a dynamic timewrap option on my dashboard. Based on the user input (a specific time range and a timewrap variable), I want some graphs on the dashboard to plot the events from that time range as well as the events from the day or week before, depending on the timewrap variable. Is this doable? I have attached some messy code. Thank you for your advice!
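One possible approach (a sketch with assumed names: my_index and the dropdown token $wrap_span$ are placeholders) is to drive timewrap with a dashboard token whose choices are the wrap periods you want, e.g. 1d and 1w:

```
index=my_index
| timechart span=1h count
| timewrap $wrap_span$
```

With the panel bound to the dashboard's time picker, the user picks the time range, and the token controls whether each series is overlaid against the previous day or the previous week.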
I need assistance with whitelisting, as I can't make it work. I'm running the free trial version 9.0.0 of Splunk Enterprise. I have 1 receiver (on a CentOS VM) and some Windows and CentOS systems (VMs and physical devices) with the Universal Forwarder installed. I'm getting data in from all my systems. On the Windows systems I only need to see data from select Windows Security Log events and would like to exclude all other log data/events. I've read Splunk's documentation about whitelisting, and I guess I just don't understand what I'm reading. It doesn't seem to be working: my license usage hasn't decreased, and/or I don't know how to verify whether it's working.

I created an inputs.conf file in /etc/system/local/ on the Universal Forwarders, and its content is:

    [WinEventLog://Security]
    whitelist=1100,1101,1102,4616,4624,4625,4634,4647,4648,4657,4704,4705,4719,4720,4722,4723,4724,4725,4726,4740,4767,4776,4777,4616

My questions:
1. Is this correct?
2. Do I have to put the statement disabled = 0, or is it implied?
3. I haven't configured anything through Splunk Web; do I need to do that?
4. Where do I save the inputs.conf file? On the receiver only, on the Universal Forwarders only, or on both?
5. Do I need to include all the statements from the default inputs.conf file in my new one?
6. Besides decreased license usage, is there a way to know if my whitelist is working?

Thank you for any and all help.
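For comparison, a minimal sketch of the stanza as I understand the documentation (event IDs trimmed here for brevity; whitelist on a WinEventLog input filters by event code, and spelling out disabled = 0 costs nothing if you are unsure whether the input is enabled by default):

```
[WinEventLog://Security]
disabled = 0
whitelist = 1100,1101,1102,4624,4625
```

One way to check whether the filter is taking effect is to search the indexed Security data and confirm that only whitelisted codes arrive, e.g. with | stats count by EventCode.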
What happened to date_wday, date_hour, and the others? Am I going nuts, waking from a dream where they used to be there all the _time? It looks like date_mday and date_month are still there... on 8.2.6.
Hi Team, I have two searches: one is a normal search and the other is an inputlookup, and both return a count. Both always return a single value, so I used appendcols. My end goal is to perform an eval operation on them, as shown below.

Query:

    index=data
    | stats dc(number) as X_data
    | appendcols [| inputlookup data.csv | stats dc(number) as Y_data]
    | eval result=X_data/Y_data

Since the outer search returns fast (in this case X_data), the eval seems to evaluate the expression before the subsearch that produces Y_data completes.

Example:
X_data = 237
Y_data = 71
Expected result = 3.34
Actual result = 1.00

How do I fix this issue?
Has anyone found a way to look at metrics for stored procedures that are not in the Top 200 queries? I have been able to narrow the timeframe down to minutes and find some, but that is very time-consuming and painful. Is there an API that would allow us to pull metrics based on stored procedure name? Or is there a way to increase the number of Top queries to, say, 300? Any information related to accessing these rarely executed stored procedures would be appreciated.
  How to filter a query?
I've been working on a project with JSON in the event, where Tags are stored similar to this:

    {
      "Name": "example",
      "Tags": [
        {"Key": "Building", "Value": "1"},
        {"Key": "Floor", "Value": "2"},
        {"Key": "Color", "Value": "Red"}
      ]
    }

The default extraction from spath provides the Tags{}.Key and Tags{}.Value fields, which are pretty much useless as-is. What I wanted was for each tag to be a field on the event, so that you could use them in a search, e.g. Building=1 AND Color=Red. But the number of tags varies, and the same value could appear in multiple tags (e.g. Building=1 AND Floor=1).

Here's what I came up with so far:

    | rename Tags{}.Key as Key, Tags{}.Value as Value
    | eval zip=mvzip(Key,Value,":")
    | mvexpand zip
    | rex field=zip mode=sed "s/$/\"}/g"
    | rex field=zip mode=sed "s/^/{\"tag./g"
    | rex field=zip mode=sed "s/:/\": \"/g"
    | spath input=zip
    | transaction Name

This approach basically uses mvzip and mvexpand to pull apart the Tags, then uses rex with sed to rebuild a JSON object to pass back through spath. It seems pretty complex, but I just can't see a better way to do it. I'm interested to hear if anyone has a better suggestion.
Hello, We currently use the Windows Defender ATP v3.6.0 app in our Splunk SOAR Cloud instance. I've discovered that the 'run query' action uses an outdated advancedqueries API endpoint that does not expose all of the tables available in Advanced Hunting. I'd like to update the 'run query' action to use the advancedhunting API endpoint, which has the proper tables exposed. I'm familiar with the code and where it needs to be updated, but not with how to create a custom version of the app. What is the proper way to customize the app and install it in our SOAR Cloud?
Hello Splunkers, I need help with monitoring a private storage S3 endpoint. We have explored the Splunk Add-on for ECS, but it looks like it's for monitoring ECS systems, so it asks for a username/password in order to gain access to ECS management capabilities for monitoring purposes. What we are trying to accomplish is to get data from an ECS S3 private endpoint using a username and secret key, but we are having a hard time finding a solution. Do any of you have any ideas on how we can approach this? Thanks in advance.
Hello! We are enriching some data and want to be able to search the results matched from the lookup tables. It works and we can search one of the lookup tables, but the other doesn't return any results, although they are there....

Here is the base search:

    index="allhosts" ip=*
    | stats count by hostname, ip, domain
    | eval hostname=upper(hostname)
    | rex field=hostname "^(?P<hostcode>..)"
    | lookup hostcode.csv hostcode AS hostcode
    | lookup applications.csv ipaddress AS ip
    | lookup vlan.csv Subnet AS ip

This works great; I can see a table with all hosts, their first two letters (naming convention), and then their matched application and VLAN:

    hostname  ip        domain   Application   hostcode  VLAN
    ABCD      10.1.1.1  Domain1  Application1  AB        VLAN1
    CDEF      10.1.1.2  Domain1  Application2  CD        VLAN2

When I add | search VLAN=VLAN1, it shows only the first row; same when I add VLAN2. BUT if I add | search Application=Application1, no results. If I add | search Application=*, no results.... Any ideas why this particular field will not return results?! Thanks!
Hi Team, I have a query where I am doing a timechart and a percentage (not calculating the % inside timechart itself, as that doesn't solve my purpose, hence doing it this way). The query works fine, but it shows all values of the field, and I want it to show only the top 10 by volume or count.

Query:

    index=xyz (catcode="*") (prodid="1") (prodcat="*") success="*"
    | bucket _time span="1d"
    | eval TheError=if(success="false" AND Error_Value like "%%",count,0)
    | eval Success=if(success="true",count,0)
    | stats sum(TheError) as "Failed", sum(Success) as "Passed", sum(count) as Total by _time, catcode
    | eval Failed_Percent=round((Failed/Total)*100,2)
    | fields _time, catcode, Failed_Percent
    | xyseries _time, catcode, Failed_Percent

I don't want to use eventstats, because it would count at the prodid level and not at the catcode level; hence this query. This query counts all failures with errors per catcode and all attempts per individual catcode, then calculates the %. With eventstats, the total count would not be per catcode but the count across all of prodid, i.e. the total attempt count across all catcodes.

Thanks in advance.
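One pattern that might fit (a sketch; it assumes a subsearch returning catcode values is acceptable as a filter on the main search) is to let a subsearch pick the top 10 catcode values by event count, then run the existing query only over those:

```
index=xyz (catcode="*") (prodid="1") (prodcat="*") success="*"
    [ search index=xyz prodid="1" | top limit=10 catcode | fields catcode ]
| bucket _time span="1d"
| eval TheError=if(success="false" AND Error_Value like "%%",count,0)
| eval Success=if(success="true",count,0)
| stats sum(TheError) as "Failed", sum(Success) as "Passed", sum(count) as Total by _time, catcode
| eval Failed_Percent=round((Failed/Total)*100,2)
| xyseries _time, catcode, Failed_Percent
```

The | fields catcode at the end of the subsearch matters: without it, top's count and percent columns would leak into the generated filter.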
I went through the Knowledge Base looking for anything that I might have missed concerning Health Rule violations and policies. My goal is to be able to send an email notification if HTTP errors occur. I can generate HTTP errors using my device, but despite the occurrences, no health rule is violated and no action is taken. Any help would be appreciated. More settings screenshots are available upon request.
I am looking to change the Navigation menu background color based on a panel's search criteria. The idea is that I don't want to open each dashboard to see which alerts are there; I just want to see the color (green, yellow, or red) change dynamically on the Navigation menu, and then go to the specific dashboard.