All Topics



I have a search query that outputs the count of events for every host (i.e., | stats count by host). Now, if the overall count is greater than 5 (say host1 and host2 together give more than 5 counts), an alert has to be triggered. Let me know how. TIA @Anonymous
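One way to do this, assuming the alert should fire on the combined count across all hosts (the index name below is a placeholder):

```
index=your_index
| stats count by host
| stats sum(count) as total_count
| where total_count > 5
```

Saved as an alert with the trigger condition "Number of Results is greater than 0", this fires only when the summed count across hosts exceeds 5.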
Need to calculate the percentage of two columns. I have a search that gives me totals in two columns, and I need the percentage like this: what % column "Today" is of column "Grand". Here is the search I'm using:

`duo_index` extracted_eventtype=authentication NOT auth_log_version=v2 result=SUCCESS
| eval factor=if(factor=="n/a",reason,factor)
| where factor!="None" and factor!="null"
| eval factor=upper(factor)
| stats count by factor
| eventstats sum(count) as total
| appendpipe [stats sum(count) as "equal"]
| append [search `duo_index` extracted_eventtype=authentication NOT auth_log_version=v2 result=FAILURE
    | eval factor=if(factor=="n/a",reason,factor)
    | where factor!="None" and reason!="null"
    | stats count by reason
    | eventstats sum(count) as total
    | appendpipe [stats sum(count) as "total2"]]
| eval perc=(total2/equal)*100
| table perc equal total2

Thank you
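If the goal is simply what percentage one column is of the other, and both values land on the same result row, a minimal sketch (field names assumed from the question) is:

```
... | eval perc=round((Today/Grand)*100, 2)
| table Today Grand perc
```

A likely pitfall with the append/appendpipe approach above: "equal" and "total2" end up on different rows, and eval operates per row, so no single row has both numbers to divide. Landing both totals on one row (e.g., via eventstats) avoids that.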
I am trying to follow the instructions from this Splunk .conf19 session: https://conf.splunk.com/watch/conf-online.html?search=FN1315#/ I am stuck where I need to update the auth token in CYA_Import_Splunk_Query. I am using Okta with Splunk and distributed Splunk search heads. Any suggestions on how I can get the curl command working to update/create the knowledge object files?
As the title says: if we have a field such as sourcetype=log4j for all results, should I add it to the search or remove it from our search to reduce the search time?
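In general, keeping an explicit sourcetype filter in the base search is cheap and lets Splunk discard non-matching events early, so it usually helps or is at worst harmless, e.g. (index name and search term are placeholders):

```
index=main sourcetype=log4j ERROR
```

Even when every event already matches, filters on indexed fields like sourcetype cost little; the bigger wins come from restricting the index and the time range.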
I am starting to add some JS elements to dashboards. I can't quite get the text input box to work like I think it should (nor is the time range working, but I haven't really worked on that yet). I feel like I am close, but I have tried a few options and it isn't loading.

<form script="js/views/js_testing.js">
  <label>JS Testing</label>
  <fieldset submitButton="true">
    <input type="text" token="ip_field1">
      <label>Test Value</label>
    </input>
    <input type="time" token="field1">
      <label>Time Picker</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Test data - $ip_field1$</title>
      <html>
        <div id="mytable1"/>
      </html>
    </panel>
  </row>
</form>

JavaScript (note: "jquery" added to the require list so that $ is defined):

// LIBRARIES
require([
  "jquery",
  "splunkjs/mvc",
  "splunkjs/mvc/searchmanager",
  "splunkjs/mvc/singleview",
  "splunkjs/mvc/tableview",
  "splunkjs/mvc/simplexml/ready!"
], function($, mvc, SearchManager, SingleView, TableView) {
  // SEARCH MANAGERS
  var search1 = new SearchManager({
    id: "connections",
    search: "index=zeek_conn id.orig_h=$ip_field1$ OR id.resp_h=$ip_field1$ | table _time id.orig_h id.resp_h",
    preview: true,
    cache: true,
    earliest_time: "-30d",
    latest_time: "now"
  }, {tokens: true});

  // TOKENS
  var tokens = mvc.Components.get("default");
  $(document).on("click", "#submit", function(e) {
    var tok1 = tokens.get("ip_field1");
    if (tok1 == undefined || tok1 == "") {
      tokens.set("ip_field1", "*");
    }
  });
  var mytoken = tokens.get("ip_field1");

  // INSERT_TABLE_VIEW
  var table1 = new TableView({
    id: "my-table1",
    managerid: "connections",
    el: $("#mytable1")
  }).render();
});

Thanks!
I am experiencing this continuous notification in my environment:

Search peer has the following message: The number of search artifacts in the dispatch directory is higher than recommended (count=12025, warning threshold=10000) and could have an impact on search performance. Remove excess search artifacts using the "splunk clean-dispatch" CLI command, and review artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold in limits.conf.

What could be the key checkpoints for this investigation? I would like to find the root cause: which user is triggering those searches, which dashboards are not optimized, or any other specific reason. I am already running rest searches and checking scheduled searches running every minute, but any pointers on the available solutions would be helpful.
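As a starting point for the investigation, the search jobs REST endpoint can show who owns the artifacts currently in dispatch (a sketch; field availability may vary by version):

```
| rest /services/search/jobs
| stats count by author, label
| sort - count
```

This groups live artifacts by owning user and saved-search name, which helps spot a scheduled search or user generating an outsized share of artifacts.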
I would like to find dashboards which are not optimized, where each panel triggers an independent search and consumes all the resources. Is there a Splunk search to identify such dashboards, so that I can educate the users to use post-process searches and optimize dashboard resource consumption?
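One hedged approach is to mine the audit index for search volume per user, then drill into the search strings of the heaviest offenders:

```
index=_audit action=search info=granted
| stats count by user
| sort - count
```

Mapping counts back to a specific dashboard generally requires inspecting the recorded search strings (dashboard panels run as ad-hoc searches), so treat this as a starting point rather than a definitive per-dashboard report.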
Some of the data coming in from one of our indexes is doing the following (it appears the data is repeating for each field):

ip                          User        System
192.168.1.1 192.168.1.1     BOB BOB     ABC ABC

How can I get the data so it only shows one value per field (how do I get it to stop repeating the same data in each field)?

ip              User    System
192.168.1.1     BOB     ABC

Dedup obviously won't work in this instance.
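If each field is genuinely a multivalue field with repeated values (often caused by the same field being extracted twice, e.g. by both automatic KV extraction and an explicit extraction), mvdedup may help; a sketch using the field names from the example:

```
... | eval ip=mvdedup(ip), User=mvdedup(User), System=mvdedup(System)
```

This collapses repeated values within each multivalue field. If the duplication really comes from double extraction, disabling one of the extractions fixes the root cause.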
Our Splunk environment appears to have been installed with the basic installation and just handed over to our development staff, who have done some amazing things with it for our application monitoring and such. I recently noticed this and upgraded it from 7.1 to 7.3 Enterprise and then to 8.1 Enterprise. I also upgraded the majority of the applications, ensuring that I kept most on the app version that supported both 7.x and the new 8.x. Our Splunk App for Windows Infrastructure is currently version 4.8.4, and research is telling me to upgrade to version 5.0 prior to moving on to version 7. That is where my question comes in: if I open up the app, it's at the Setup stage, meaning it doesn't even appear to have been set up. If that is the case, can I not just delete version 4.8.4 and go directly to version 7 and set that up? Or should I go to 5.0 and then up, to make sure? Is there any way to verify? Thanks for any advice. We are just a small shop (which may be why it was set up as it was), so starting fresh with a new install would be easier for me to understand, but I don't want to take a shortcut either and get burnt.

Setup the Splunk App for Windows Infrastructure
  Prerequisites - Requirements for the app
  Check Data - Verify data is coming into Splunk
  Customize Features - Detect and choose which features to use

Splunk v7.2.0+
  OK: Splunk v8.1.0 detected
  OK: Key value store is enabled.

Splunk Add-on for Microsoft Windows v7.0.0
  Update required: v4.8.4 installed. It does not match with v7.0.0

Splunk Supporting Add-on for Microsoft Windows Active Directory v3.0.1
  OK: Splunk Supporting Add-on for Microsoft Windows Active Directory v3.0.1 detected
Good day everyone. I am looking for a way to add server-specific information to events that are forwarded to my Splunk indexers via universal forwarders. The additional information would be customer- or service-specific, and would make searching for logs easier than needing to know server hostnames (which are generic and not specific to a customer or function). For instance, logs ingested from Server A would be associated with Customer X and component Y, but the same log files ingested from Server B would be associated with Customer Z and component A. This would help my engineers find issues with Customer X or component Z without needing to know specific server hostnames, which could change day to day or week to week. I have been scouring the documentation but have not found anything that jumps out at me as the solution, or even as something possible. Does this capability exist? And if so, how could it be implemented? Thank you.
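One mechanism that seems to fit is the _meta setting in inputs.conf on each universal forwarder, which attaches indexed fields to everything the input sends (the paths and field names below are hypothetical):

```
# inputs.conf on Server A
[monitor:///var/log/myapp]
_meta = customer::customer_x component::component_y
```

```
# fields.conf on the search head, so the fields are searchable
[customer]
INDEXED = true

[component]
INDEXED = true
```

Engineers could then search with customer=customer_x or component=component_y regardless of which hostname the events came from.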
So I have a large JSON array that is now being brought in and ingested correctly, but I cannot do any stats function on it. What I'd like to do are things like this, but the below search just brings in the same value for each name.

index=storage sourcetype="netbackup:license"
| spath output=Name path=data{}.attributes.clientDetails{}.clientName
| spath output=ClientConsumptionMB path=data{}.attributes.clientDetails{}.clientConsumptionMB
| spath output=PolicyName path=data{}.attributes.clientDetails{}.policyDetails{}.policyName
| spath output=PolicyType path=data{}.attributes.clientDetails{}.policyDetails{}.policyType
| stats last(ClientConsumptionMB) by Name

So then I tried to do this:

index=storage sourcetype="netbackup:license"
| spath output=Name path=data{}.attributes.clientDetails{}.clientName
| spath output=ClientConsumptionMB path=data{}.attributes.clientDetails{}.clientConsumptionMB
| spath output=PolicyName path=data{}.attributes.clientDetails{}.policyDetails{}.policyName
| spath output=PolicyType path=data{}.attributes.clientDetails{}.policyDetails{}.policyType
| eval Name=upper(Name)
| eval NameCount=mvzip(Name,ClientConsumptionMB)
| mvexpand NameCount
| eval mvNameCount=split(NameCount,",")
| eval Name=mvindex(mvNameCount,0)
| eval ClientConsumptionMB=mvindex(mvNameCount,1)
| stats last(ClientConsumptionMB) by Name

And ran into a 300-line limit for mvexpand. Help?
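A pattern that avoids the mvzip/split round trip is to expand the clientDetails array once and then run spath on each element; a sketch based on the paths above:

```
index=storage sourcetype="netbackup:license"
| spath output=details path=data{}.attributes.clientDetails{}
| mvexpand details
| spath input=details
| eval Name=upper(clientName)
| stats last(clientConsumptionMB) by Name
```

Note that the mvexpand ceiling is a memory limit (max_mem_usage_mb in limits.conf) rather than a hard row count, so it can be raised; still, expanding a single field keeps each expanded row smaller.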
I just installed an update on a RedHat 7 system to upgrade our Splunk ES from v8.0 to 8.1.1. Upgrade appeared to go fine. When I start the services now on the 8.1.1 version, I am able to log in through web but the search app is just stuck loading. I can navigate to other apps that appear ok inside of Splunk web but when I try to navigate to search and reporting, it just says loading.
Hello. I am configuring SAML. If my SAML configuration fails, I do not want to get locked out, so I want to create a local admin user as a way to get into the system. I want to configure access control such that local user authentication is tried first. Only if no local user is found should SAML (or LDAP) authentication kick in. Is there such a priority setting in Splunk?
I know this was probably answered before, but I am not able to find any answers... I am trying to install the Splunk UF on a Linux server after having to manually uninstall it because of overlapping 7.2.3 (.tgz) and 8.1.0 (.rpm) packages. I am trying to install the 8.1.0 RPM but get the error that it is already installed. When I try to uninstall it (since the error says it's installed), the uninstall fails as well. I can't reboot the server because of operations, but I would like to have Splunk operational and reporting to the indexer. Can anyone help with guidance on how to overcome this error? Thank you for any assistance that can be provided.
Can anyone give me any hints as to what I might be doing wrong? I have this query in a scheduled real-time alert where I'm hoping to retain the lastupdated time and lastfault time in a KV store. If I run the query interactively I get the results I expect; however, when running it as a scheduled real-time alert, nothing is updated in the KV store. Any help would be appreciated.

sourcetype="web-heartbeat" `website_monitoring_search_index` `filter_inoperable`
| eval time=_time
| eval response_time=total_time
| convert ctime(time)
| fillnull response_code value="Connection failed"
| eval response=if(timed_out == "True", "Connection timed out", response_code)
| eval response=if(response_code=="", "Connection failed", response_code)
| eval state=response
| eval _key=title
| eval lastupdated=time()
| eval lastfault=time()
| fields - _raw _time
| fields _key time title host url response_code response state lastupdated lastfault
| outputlookup website_monitoring_state append=false key_field=_key
Greetings, I am having issues with my heavy forwarder getting data into my indexers without having a local indexes.conf containing the index name. I am doing all .conf work from the CLI and not the web UI.

The issue is that "forwardedindex.filter.disable = true" is not working as expected, and I have to either:
1. Create a local copy of the index I want to send to in indexes.conf, or
2. Add the index name to the whitelist setting in outputs.conf.
Otherwise data does not get sent to the indexers. Assistance please. Here is my outputs.conf, for example:

[tcpout]
defaultGroup = test_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:test_indexers]
#server = <ip address>:<9996>
server = x.x.x.x:9996,x.x.x.x:9996
disabled = false
sslPassword = <nope>
sslCertPath = $SPLUNK_HOME
sslRootCAPath = $SPLUNK_HOME
Hello Splunkers, I don't know if my title makes sense, but here is the situation: I have an alert called "buy signal" and another called "sell signal". I want to make sure that only my buy signal alert can be triggered ONCE, and after that, that only my sell signal alert can be triggered ONCE, and so on. Do you know of a parameter that can do this? Thank you for your help.
Hello. I have a search that results in, amongst other things, fields that are ALMOST duplicates. Example:

Bob: Task incomplete.
Steve: Task Incomplete.
George: Task Incomplete.
Fred: Task Complete.
Spock: Task Complete.

Is there a way to generate a table which results in:

| Task Status      | Team Player Responsible |
| Task Complete    | Fred, Spock             |
| Task Incomplete  | Bob, Steve, George      |

For each instance of "Task Complete" and "Task Incomplete", I want to know which persons to tag, in a simple table, without listing every user in a giant list. I've looked into dedup, uniq, and subsearches, but haven't figured out how to list the non-duplicated parts next to a single instance of the duplicated part. [Insert deity], I hope that made sense. Help?
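A hedged sketch, assuming the name and status have already been extracted into fields (here hypothetically called person and status): normalize the case first, then group with stats values:

```
... | eval status=lower(status)
| stats values(person) as "Team Player Responsible" by status
```

lower() makes "Task incomplete." and "Task Incomplete." collapse into one group, and values() lists each distinct person once per status.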
Greetings! I am dealing with the following directory structure:

/var/log/myfolder/log-type_a.log
/var/log/myfolder/log-type_b.log
/var/log/myfolder/log-type_b1.log
/var/log/myfolder/log-type_b2.log
/var/log/myfolder/log-type_c.log

I want to block log-type_b.log, log-type_b1.log, and log-type_b2.log. My inputs.conf:

[default]
index = my_default_index

[blacklist:/var/log/myfolder/log-type_b*]

I have tried different variations for the blacklist, e.g.:

[blacklist:///var/log/myfolder/log-type_b*]

The above blacklist stanza is not working. Please let me know what I am missing.
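As I understand it, the [blacklist:<path>] stanza form takes literal paths without wildcards, which would explain why those variations fail. An alternative is the blacklist attribute inside the monitor stanza, which is a regular expression matched against the full file path (the stanza below is a sketch):

```
[monitor:///var/log/myfolder]
index = my_default_index
blacklist = log-type_b.*\.log$
```

The regex excludes log-type_b.log, log-type_b1.log, and log-type_b2.log while the rest of the directory is still monitored.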