All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

There is a bug in the Job "Share" button: it only works for admins! Analysts have mentioned that, within the search head, they are unable to share jobs with one another. This means that when anyone is investigating alerts, reviewing things, etc., they must share the actual SPL of the search, and whoever they are working with has to re-run that search. I know that the default "admin" role allows a user to view the private jobs of others, but of course we don't want to give the "admin" role to all of the analysts. I reviewed all of the Splunk "capabilities" and could not find one related to this permission. I am going to open a case, but I am looking for a quicker workaround.
Very strange scenario. I'll use a rex statement to retrieve data and it works perfectly, but if I copy and paste the rex command that Splunk generated (copied from the Job Inspector) it does not work; I receive an error. Here is an actual snippet of the raw data that I used as an example in my erex statement (the data in bold is what went into my example): "usbProtocol":1,"deviceName":"Canon Digital Camera","vendorName":"Canon Inc.", The Job Inspector spat out the following: | rex "(?i)\"deviceName\\\":\\\"(?P<Device>[^\\]+)" and the extracted data looked perfect, like so: Canon Digital Camera. But if I use that rex statement from the Job Inspector in my search, Splunk says nay nay. The error received was "Error in 'rex' command: Encountered the following error while compiling the regex '(?i)"deviceName\":\"(?P<Device>[^\]+)': Regex: missing terminating ] for character class." I reached out to a coworker who provided | rex ".*deviceName(?<Model>.*?)," and it works to a degree, but it includes characters that I'd rather not see in my data. An actual example of what comes out: \":\"Canon Digital Camera\" Also mentioning this in case it matters: where there is no data available/null within the "deviceName" raw data, it shows like this: \":\"\" I'd really appreciate some guidance with my regex. I've been delving into this lately and have used many training materials, but I can't seem to figure this one out!
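A minimal sketch of a rex that might work here, assuming the raw event really does contain the backslash-escaped quotes shown in the coworker's output (i.e. deviceName\":\"Canon Digital Camera\"); the character class excludes backslashes so the capture stops before the closing \":

| rex "deviceName\\\\\":\\\\\"(?<Device>[^\\\\]+)"

Each literal backslash in the data needs four backslashes in the search string, because the search parser strips one level of escaping before the pattern reaches the regex engine; the Job Inspector shows the already-unescaped form, which is why pasting it back fails. If the raw data were plain, unescaped JSON, a simpler | rex "\"deviceName\":\"(?<Device>[^\"]+)\"" would do.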
Hi, dashboard help is needed. I am attaching pictures. Please provide SPL. I need to dynamically connect an item selected from the "Year OA Motives" drop-down to the tabs/buttons below it (SPL needed). Clicking each tab should open a table containing the corresponding data in a panel below the tabs (SPL needed). Regards, Selina.
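A minimal Simple XML sketch of the token wiring this usually takes, assuming a classic dashboard; the index, field, and token names below are placeholders for illustration. The drop-down sets $motive$, the link input acts as the tabs, and each tab's change handler shows one panel via depends:

<input type="dropdown" token="motive">
  <label>Year OA Motives</label>
  <fieldForLabel>motive</fieldForLabel>
  <fieldForValue>motive</fieldForValue>
  <search>
    <query>index=oa_index | stats count by motive</query>
  </search>
</input>
<input type="link" token="tab">
  <label>View</label>
  <choice value="summary">Summary</choice>
  <choice value="details">Details</choice>
  <change>
    <condition value="summary">
      <set token="show_summary">true</set>
      <unset token="show_details"></unset>
    </condition>
    <condition value="details">
      <set token="show_details">true</set>
      <unset token="show_summary"></unset>
    </condition>
  </change>
</input>
...
<panel depends="$show_summary$">
  <table>
    <search>
      <query>index=oa_index motive="$motive$" | stats count by status</query>
    </search>
  </table>
</panel>
<panel depends="$show_details$">
  <table>
    <search>
      <query>index=oa_index motive="$motive$" | table _time motive status details</query>
    </search>
  </table>
</panel>

The inputs live inside the form's fieldset and the panels inside a row; only the panel whose depends token is set gets rendered.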
I have a dashboard that already contains network data. I am trying to create a search dropdown or search bar so that I can enter a specific IP and have the dashboard show all the data for that IP.
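A minimal sketch of a text input whose token each panel's search references, assuming a classic Simple XML dashboard; the index and field names are placeholders:

<input type="text" token="ip_tok">
  <label>IP address</label>
  <default>*</default>
</input>
...
<query>index=network_data (src_ip="$ip_tok$" OR dest_ip="$ip_tok$") | stats count by src_ip dest_ip</query>

The default of * keeps the panels populated when no IP has been entered yet.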
There are about three ways to set up outputs.conf when you are trying to set up forwarders: you can either do a CLI entry to add a forward-server (an indexer), or you can edit the outputs.conf file directly. We made our outputs.conf according to a tutorial we saw, but we are having issues getting data in. So the question is: what is broken about our outputs.conf file? (Also, side note: originally the .102 address wasn't in the file, and neither was the default-autolb group.) Thanks.
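For comparison, a minimal outputs.conf sketch that auto-load-balances to two indexers; the IP addresses and port are placeholders, and each indexer must also have a splunktcp input listening on that port:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 10.0.0.101:9997, 10.0.0.102:9997

After editing, restart the forwarder and check $SPLUNK_HOME/var/log/splunk/splunkd.log on the forwarder for TcpOutputProc messages to confirm it is actually connecting.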
Hi, I have a search where I am attempting to extract 2 different fields from one string response using "rex":

1st field: rex \"traceId\"\s:\s\"?(?<traceId>.*?)\"
2nd field: rex "\"statusCode\"\s:\s\"?(?<tstatusCode>2\d{2}|4\d{2}|5\d{2})\"?"

I am attempting to "dedup" the 1st field (traceId) before I pipe those results into the 2nd field (statusCode). I have attempted multiple variations based on Splunk threads and other internet resources. Below is the query I am running:

index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolApp ("\"statusCode\"") | rex \"traceId\"\s:\s\"?(?<traceId>.*?)\" | dedup traceId | rex "\"statusCode\"\s:\s\"?(?<tstatusCode>2\d{2}|4\d{2}|5\d{2})\"?" //I have tried a lot of other permutations, this is just one

Below is the response from the log (it looks like JSON but it is string type):

\\Sample Log (looks like a JSON object, but it's a string): "{ "correlationId" : "", "message" : "", "tracePoint" : "", "priority" : "", "category" : "", "elapsed" : 0, "locationInfo" : { "lineInFile" : "", "component" : "", "fileName" : "", "rootContainer" : "" }, "timestamp" : "", "content" : { "message" : "", "originalError" : { "statusCode" : "200", "errorPayload" : { "error" : "" } }, "standardizedError" : { "statusCode" : "500", "errorPayload" : { "errors" : [ { "error" : { "traceId" : "9539510-d8771da0-a7ce-11ed-921c-d6a73926c0ac", "errorCode" : "", "errorDescription" : "" "errorDetails" : "" } } ] } } }, }"

The intent of the query is to: extract field "traceId", then "dedup" traceId (to remove duplicates), then extract field "statusCode" and sort the "statusCode" values. When running these regexes independently of each other they work as expected, but I need to combine them into one query as I will be creating charts in my next step. All help is appreciated.
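A minimal sketch of one way to chain the two extractions, assuming the raw text matches the sample above; the main changes are quoting the first rex pattern and doing both extractions before the dedup:

index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolApp "\"statusCode\""
| rex "\"traceId\"\s*:\s*\"(?<traceId>[^\"]+)\""
| rex "\"statusCode\"\s*:\s*\"?(?<statusCode>[245]\d{2})\"?"
| dedup traceId
| sort statusCode

The unquoted pattern in the first rex of the original query ( rex \"traceId\"... ) is what usually trips the parser, since rex expects its pattern as a quoted string. Note that rex only keeps the first match per event, so with both a 200 and a 500 in the same event you would need max_match or a more specific pattern to pick the right one.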
Dear Splunkers, I just set up a little testing environment, with Splunk Enterprise running smoothly on a Debian server and a Universal Forwarder running on a Windows 10 machine. My goal is to send some Sysmon logs into Splunk. I first started to follow this page from the official Splunk documentation: https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/Install But unfortunately, this page does not explain how to install the client side of the app (is there a client side anyway? Nothing is explained about it). What I did is set up my inputs.conf file in etc/system/local on the client machine. It partially worked, as the data is being sent to the Splunk server, but none of the fancy dashboards and graphs that the Sysmon for Splunk add-on or app is supposed to display are available. I know I must have missed something on the client part, but I also have to mention that the Kafkaesque, intricately inter-linked Splunk documentation does not help me much, to say the least. It would be extremely nice if someone could turn on the light, because so far my Splunk journey has been black as midnight on a moonless night. Thanks a lot, Gargantua
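For what it's worth, a minimal inputs.conf sketch for the Universal Forwarder side; the stanza, source, and renderXml settings below are what recent versions of the Splunk Add-on for Sysmon generally expect, but check the add-on's own README/inputs.conf.spec for your version, because the dashboards only populate if the sourcetype and source match what the add-on's knowledge objects look for:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational

The add-on itself (with its search-time field extractions) still has to be installed on the Splunk Enterprise side; a dashboard app on top of it will only render data it can find under the sourcetype it expects.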
Hello group, I am trying to use DB Connect to connect to a Hive server that requires Kerberos authentication. jdbc:hive2://xxx2.company.com:2181,xxx1.company.com:2181,xxx3.company.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@ADDEV.COMPANY.COM I have the keytab file somewhere. Which config file do I need to modify?
On Splunk 9.0.0 on Windows, on one of our dedicated deployment servers, when we go to Settings > Forwarder Management in the web UI we get a nice list of clients, i.e. systems running the Universal Forwarder. So far so good. My question is: how can I see what search is generating this result, i.e. this list of machines? Unlike some of the other screens, there is no magnifying glass icon to click on to invoke the search pane.
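The Forwarder Management page is driven by REST calls rather than by an SPL search, which is why there is no magnifying glass; a rough equivalent you can run from the search bar on the deployment server (a sketch; field names may differ slightly by version) is:

| rest /services/deployment/server/clients splunk_server=local
| table hostname, ip, clientName, lastPhoneHomeTime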
Hello Splunkers, I have the following raw data: 2023-02-15T12:43:06.774603-08:00 abc OpenSM[727419]: osm_spst_rcv_process: Switch 0x900a84030060ae40 MF0;www:MQM9700/U1 port 29 changed state from DOWN to INIT #012 2023-02-15T12:42:02.861268-08:00 abc OpenSM[727419]: osm_spst_rcv_process: Switch 0x900a84030060ae40 MF0;www:MQM9700/U1 port 29 changed state from ACTIVE to DOWN #012 I am using the search below:

index=abc "ACTIVE to DOWN #012" host=ufmc-ndr* Switch IN(*) port IN(*) | stats count by Switch port | rename count as "ACTIVE to DOWN Count" | appendcols [search index=abc "DOWN to INIT #012" host=ufmc-ndr* Switch IN(*) port IN(*) | stats count by Switch port | rename count as "DOWN to INIT Count"] | sort - "ACTIVE to DOWN Count"

I am trying to count the total events with "ACTIVE TO DOWN" by switch and port, and also for "DOWN TO INIT". If I run the searches separately I get the correct counts, but when I join them the values are not correct. I want a table panel with the fields Switch, port, "ACTIVE to DOWN Count", and "DOWN to INIT Count". Thanks in advance.
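A minimal sketch of doing this in a single stats pass, which avoids the row-alignment problem appendcols introduces (appendcols just pastes rows side by side in order, so switch/port combinations that appear in one search but not the other get mismatched); this assumes Switch and port are already extracted fields, as in the original search:

index=abc host=ufmc-ndr* ("ACTIVE to DOWN #012" OR "DOWN to INIT #012")
| eval transition=if(searchmatch("ACTIVE to DOWN"), "ACTIVE to DOWN", "DOWN to INIT")
| stats count(eval(transition="ACTIVE to DOWN")) as "ACTIVE to DOWN Count", count(eval(transition="DOWN to INIT")) as "DOWN to INIT Count" by Switch, port
| sort - "ACTIVE to DOWN Count"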
Is there a way to configure Splunk to give scheduled searches a higher priority than ad-hoc searches? I know we can do it using Workload Management, but since we don't use it yet, I hope there is a way to do it without Workload Management.
I have a use case in which I am attempting to create a historical KV Store, with key field value pairs as: host = string (attempting to use as unique _key) Vulnerabilities discovered on 1/1 = string Vulnerabilities discovered on 1/8 = string Vulnerabilities discovered on 1/15 = string For the initial data, I am trying to list out all the vulnerability signatures that have been discovered on each device from a weekly scan, delimited by ";", in an effort to consolidate all the signatures discovered, while aiming to update this for each output of the weekly scan by using the host as the _key value. I have this example query that explains what I am initially attempting to do with the scan output on 1/1: index=vulnerabilitydata | stats values(signature) as Signatures by host | eval Signatures=mvjoin(Signatures, "; ") | rename Signatures as "Vulnerabilities scanned in 1/1" | outputlookup HISTORICAL_VULN_KV As you can imagine, I get a list of vulnerability signatures in a multivalue field for each host in our network. What I am trying to figure out is how I can craft a scheduled weekly search to: 1. Append new vulnerability signatures to each host, if a host is already listed as a unique _key value within the existing KV Store. 2. Create a new record if a new host (_key) has discovered vulnerabilities, to start tracking this for future scans. 3. Bonus: dynamically rename Signatures to the scan date; in my experience rename only seems to rename to a static string value, without being able to incorporate Splunk timestamp logic for dynamic field naming. I feel like this should be rather easy to accomplish, but I have tried messing around with append=true and subsearches to get this to update new scan data for each host, and I have been running into issues with updating and maintaining results for multiple separate scan dates across each host. I would like to capture everything properly, and then use outputlookup to update the original KV Store to maintain a historical record for each weekly scan over time. Do I need to be more mindful of how I could be using outputlookup in conjunction with key_field=host/_key?
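A minimal sketch of a weekly scheduled search along these lines, assuming the collection is named HISTORICAL_VULN_KV and host is the unique key; the curly-brace eval syntax handles the dynamic column name (point 3), and the append/stats pair merges new results into whatever is already in the collection (points 1 and 2):

index=vulnerabilitydata earliest=-7d@d
| stats values(signature) as sigs by host
| eval sigs=mvjoin(sigs, "; ")
| eval colname="Vulnerabilities discovered on ".strftime(now(), "%m/%d")
| eval {colname}=sigs
| fields - sigs, colname
| append [| inputlookup HISTORICAL_VULN_KV]
| stats values(*) as * by host
| eval _key=host
| outputlookup HISTORICAL_VULN_KV

This rewrites the whole collection each week rather than updating rows in place; existing date columns survive because inputlookup brings them back before the merge, and new hosts simply become new rows.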
I've got a dashboard where I want to append multiple searches together based on checkbox inputs, but the nested tokens aren't resolving. I can get conditional tokens to set the entire firewall_search string, but I don't want to have to set up all the available options if I can just get the tokens to resolve to their values. Summary: I am asking for $src_ip$ and then trying to use it in a checkbox token value, nested inside another token, but the nested token doesn't resolve; instead I just get the literal $src_ip$ text. Example XML:

<input type="text" token="src_ip">
  <label>Source IP</label>
</input>
<input type="checkbox" token="firewall_search">
  <label>Firewalls</label>
  <choice value="append [search index=index1 src_ip=$src_ip$  other_specific_search_Stuff_to_index1 ]">firewall 1</choice>
  <choice value="append [search index=index2 src_ip=$src_ip$  searches_different_from_1 ]">firewall 2</choice>
  <delimiter> | </delimiter>
</input>
</fieldset>
<row>
  <panel>
    <title>dependent search</title>
    <table>
      <search>
        <query>$firewall_search$ | stats count by index, src_ip, otherfields</query>
        <earliest>$time_pick.earliest$</earliest>
        <latest>$time_pick.latest$</latest>
      </search>

When I run that, $firewall_search$ resolves to the choice value listed, but it doesn't resolve $src_ip$ to the value set in the form.
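As you're seeing, a token embedded inside a <choice> value is not re-expanded when the outer token is substituted into the search, so one workaround sketch is to keep only the per-firewall piece (the index) in the checkbox and move $src_ip$ into the query itself, where it does resolve; the prefix/suffix/delimiter build an index IN (...) clause (any per-firewall extra filters would have to move into the query or a macro):

<input type="checkbox" token="fw_indexes">
  <label>Firewalls</label>
  <choice value="index1">firewall 1</choice>
  <choice value="index2">firewall 2</choice>
  <prefix>index IN (</prefix>
  <suffix>)</suffix>
  <delimiter>, </delimiter>
</input>
...
<query>$fw_indexes$ src_ip=$src_ip$ | stats count by index, src_ip</query>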
I'm logged into my system as an admin, so I have access to all the indexes. I've also verified this by looking at the admin role. It indeed has access to all the indexes. However, when I run the below two searches  I get different counts. I would think I should get the same count. The first one gives me a lower count. It's a pretty low volume dev system so the counts are low. They are different by about 20,000 events. I ran it with a time range of yesterday so that the counts wouldn't change. Does anyone have any thoughts on what could be causing this?   | tstats count ```the count was 95835```   vs   | tstats count where index IN (*) ```the counts was 115339```  
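One common explanation for this kind of gap: a bare | tstats count with no where clause only runs over the indexes your role searches by default, while where index IN (*) expands to every non-internal index you are allowed to search, so any index missing from the role's "Indexes searched by default" list only shows up in the second count. A quick way to see where the extra events live (a sketch, nothing environment-specific assumed):

| tstats count where index IN (*) by index

Comparing that per-index breakdown against the default-indexes list on the admin role should show which indexes account for the ~20,000-event difference.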
Hello, I have created a KVstore lookup, when I try to perform outputlookup on the kvstore, I get the following error. Error in 'outputlookup' command: Could not write to collection 'test1': An error occurred while saving to the KV Store When I refer to the log file, it gives the following information, 02-16-2023 16:56:51.841 ERROR KVStoreLookup [79866 phase_1] - KV Store output failed with err: The provided query was invalid. (Document may not contain '$' or '.' in keys.) message: 02-16-2023 16:56:51.841 ERROR SearchResultsFiles [79866 phase_1] - An error occurred while saving to the KV Store. Look at search.log for more information. 02-16-2023 16:56:51.905 ERROR outputcsv [79866 phase_1] - An error occurred during outputlookup, managed to write 0 rows 02-16-2023 16:56:51.905 ERROR outputcsv [79866 phase_1] - Error in 'outputlookup' command: Could not write to collection 'test1': An error occurred while saving to the KV Store. Look at search.log for more information.. Can you please advise on this. Thanks in advance. Siddarth
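The KV Store rejects documents whose field names contain a dot or a dollar sign, which typically comes from spath/auto-extracted JSON fields such as content.message; a workaround sketch, assuming dotted field names are the culprit here, is to rewrite those names just before the outputlookup:

... | rename *.* as *_* | outputlookup test1

Names with more than one dot (or with a $) may need an additional rename pass, since the wildcard rename only replaces the first dot.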
Hi all, I'm working on a dashboard in which I populate a panel with summary data. The summary data runs once per hour over a very large dataset, looking back over the past 60 minutes, generating an | xyseries of response codes across another split-by field (field1). What I'd like to accomplish is to fill the gap in summary data with live data via a subsearch, if possible. Essentially, the panel will be half filled during the gap between summary runs, e.g. it ran at 10 AM and you're looking at 10:30 AM. It also functions poorly for small time increments, say 15 minutes. For now, I'm limited by resources and cannot refine the summary index load to run more frequently; I expect that would fix the problem (say, if it ran every 5 minutes). Is it possible to dynamically generate a timeframe for a live search based on where the summary data has a gap? Within the panel, I pull summary data like so:

index=summary source="summaryName" sourcetype=stash search_name="summaryName" field1=*
| stats sum(resp*) AS resp* BY _time
| untable _time responseCode Volume
``` this part is just processing to apply the description I need to the response code ```
| eval trimmedCode=substr(responseCode, -2)
| lookup responseLookup.csv ResponseCode AS trimmedCode OUTPUT ResponseDescription
| eval ResponseDescription=if(isnull(ResponseDescription), " Description Not Available", ResponseDescription)
| eval Response=trimmedCode+" "+ResponseDescription
``` since volume is established by the summary index, sum(Volume) here ```
| timechart partial=f limit=0 span=15m sum(Volume) AS Volume BY Response

I then append a subsearch here to pull the same info from live data, using another timechart after the processing:

| timechart span=15m limit=0 count AS Volume by Response
``` presumably here, I would need to sum(Volume) AS Volume to add up the totals ```

I looked into some other searches that pull a single event from a summary index and tried to generate timestamps based on that, but I haven't had much luck. Like so:

index=summary source=summaryName sourcetype=stash
| head 1
| eval SourceData="SI"
| append [| makeresults | eval SourceData="gentimes"]
| head 1
| addinfo
| eval earliest=if(SourceData="SI",if(_time>info_min_time,_time,info_min_time),info_min_time)
| eval latest=info_max_time
| table earliest,latest

Any ideas? Is this a possibility or am I limited by the summary data?
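One possible sketch of the "dynamic timeframe" idea: let a subsearch hand the live search its earliest boundary, based on the newest event in the summary index, so the live portion only covers the gap. The index/sourcetype names for the live data are placeholders; | return earliest emits an earliest=<epoch> term that the outer search treats as a time modifier:

index=live_index sourcetype=live_sourcetype field1=*
    [ search index=summary source="summaryName" sourcetype=stash search_name="summaryName"
      | stats max(_time) as earliest
      | return earliest ]
``` ...same response-code processing as above... ```
| timechart span=15m limit=0 count AS Volume BY Response

That block could then be appended to the summary-data search and the two result sets merged, e.g. with a final | stats sum(*) as * by _time over the combined timechart rows.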
Hello All, I'm a Splunk newbie and I have 3 questions. In our new Splunk deployment we have 1 search head, 1 deployment server, 2 indexers, and 2 forwarders, set up in a distributed environment. Question #1: Is there a streamlined or quicker way to set up each server using the CLI, on both Linux and Windows? Question #2: Do we need to install apps or add-ons on every server to get the whole Splunk deployment to work? Question #3: How would I know which add-ons and apps are important to our deployment? This is a project of mine and time is a factor. Please assist. Thanks, RPM
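For question #1, a minimal CLI sketch of the forwarder-side setup (hostnames and ports are placeholders; the same commands exist on Windows via bin\splunk.exe):

# point the forwarder at the deployment server
splunk set deploy-poll deploy.example.com:8089
# point the forwarder at an indexer
splunk add forward-server idx1.example.com:9997
splunk restart

On each indexer, splunk enable listen 9997 opens the receiving port; most remaining per-server settings can then be managed as .conf files pushed from the deployment server rather than configured by hand.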
I am working on a dashboard for my help desk and would like to embed a website, https://status.twilio.com/, so that staff can easily see from the vendor whether they are experiencing outages, all from a single pane of glass. I can see how to do this in old versions of Splunk with HTML dashboards, but not with the new version of dashboards, and I'm not having much luck figuring it out. I attempted the below via a drilldown, but clearly I'm not getting what I want, which is for the content to display as an embedded page within the dashboard. Granted, my drilldown doesn't work either. Below is my code for the panel where I am trying to get this working. Not being a coder and not familiar with XML, I realize I am asking for quite a bit of help. Any insight anyone can offer is greatly appreciated.

{
    "type": "splunk.table",
    "options": {
        "count": 100,
        "dataOverlayMode": "none",
        "drilldown": "none",
        "percentagesRow": false,
        "rowNumbers": false,
        "totalsRow": false,
        "wrap": true
    },
    "dataSources": {},
    "title": "Twilio Status",
    "description": "",
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "https://status.twilio.com/"
            }
        }
    ],
    "context": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
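As far as I know, Dashboard Studio has no native way to embed an external web page, so one fallback sketch is a classic Simple XML dashboard with an <html> panel; whether the page actually renders also depends on the target site allowing framing (X-Frame-Options/CSP headers), which is outside Splunk's control:

<dashboard>
  <label>Help Desk</label>
  <row>
    <panel>
      <title>Twilio Status</title>
      <html>
        <iframe src="https://status.twilio.com/" width="100%" height="600" frameborder="0"></iframe>
      </html>
    </panel>
  </row>
</dashboard>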
Is there any Splunk add-on for the Oracle ZFS appliance to collect data?
I downloaded the app to my HF and send my alarms there from vCenter via SNMP. I also configured the following SNMP input and opened the needed ports. What should I do to make it work so that I can see the events (in my SH, of course)? What am I missing?