All Topics


Hello Splunkers,

I have the following raw data:

2023-02-15T12:43:06.774603-08:00 abc OpenSM[727419]: osm_spst_rcv_process: Switch 0x900a84030060ae40 MF0;www:MQM9700/U1 port 29 changed state from DOWN to INIT #012
2023-02-15T12:42:02.861268-08:00 abc OpenSM[727419]: osm_spst_rcv_process: Switch 0x900a84030060ae40 MF0;www:MQM9700/U1 port 29 changed state from ACTIVE to DOWN #012

I am using the search below:

index=abc "ACTIVE to DOWN #012" host=ufmc-ndr* Switch IN(*) port IN(*)
| stats count by Switch port
| rename count as "ACTIVE to DOWN Count"
| appendcols
    [ search index=abc "DOWN to INIT #012" host=ufmc-ndr* Switch IN(*) port IN(*)
      | stats count by Switch port
      | rename count as "DOWN to INIT Count" ]
| sort - "ACTIVE to DOWN Count"

I am trying to count the total events containing "ACTIVE to DOWN" by switch and port, and likewise for "DOWN to INIT". If I run each search separately I get the correct counts, but when I join the two the values are wrong. I want a table panel with the fields Switch, port, "ACTIVE to DOWN Count", and "DOWN to INIT Count".

Thanks in advance.

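[Editor's note] appendcols pairs rows by position rather than by Switch/port, so the two result sets drift out of alignment whenever a Switch/port combination appears in one search but not the other. A minimal single-search sketch, assuming Switch and port are already extracted as fields:

index=abc host=ufmc-ndr* ("ACTIVE to DOWN #012" OR "DOWN to INIT #012")
| eval transition=case(match(_raw, "ACTIVE to DOWN"), "ACTIVE to DOWN",
                       match(_raw, "DOWN to INIT"),  "DOWN to INIT")
| stats count(eval(transition="ACTIVE to DOWN")) as "ACTIVE to DOWN Count"
        count(eval(transition="DOWN to INIT"))  as "DOWN to INIT Count"
        by Switch port
| sort - "ACTIVE to DOWN Count"

Because both counts come from one stats pass keyed by Switch and port, the two columns stay aligned per row.
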
Is there a way to configure Splunk so that scheduled searches have a higher priority than ad-hoc searches? I know we can do it with Workload Management, but since we don't use it yet, I hope there is a way to do it without Workload Management.

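[Editor's note] Outside of Workload Management there are a couple of coarser knobs that may help; a hedged sketch of the relevant settings (values are illustrative, not recommendations, and should be tested on your own topology):

# savedsearches.conf - per scheduled search; needs the edit_search_schedule_priority capability
[My important scheduled search]
schedule_priority = higher        # default | higher | highest

# limits.conf - share of the total concurrent-search slots reserved for the scheduler
[scheduler]
max_searches_perc = 60

The first raises individual scheduled searches in the scheduler's own queue; the second gives the scheduler a larger slice of the concurrency budget relative to ad-hoc searches.
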
I have a use case in which I am attempting to create a historical KV Store with key/value pairs like:

host = string (attempting to use as the unique _key)
Vulnerabilities discovered on 1/1 = string
Vulnerabilities discovered on 1/8 = string
Vulnerabilities discovered on 1/15 = string

For the initial data, I am listing all the vulnerability signatures discovered on each device by a weekly scan, delimited by ";", to consolidate the signatures, and I want to update this for each weekly scan output using the host as the _key value. This example query shows what I am attempting to do with the scan output from 1/1:

index=vulnerabilitydata
| stats values(signature) as Signatures by host
| eval Signatures=mvjoin(Signatures, "; ")
| rename Signatures as "Vulnerabilities scanned in 1/1"
| outputlookup HISTORICAL_VULN_KV

As you can imagine, I get a list of vulnerability signatures in a multivalue field for each host in our network. What I am trying to figure out is how to craft a scheduled weekly search that will:

1. Append new vulnerability signatures to each host, if the host is already listed as a unique _key value in the existing KV Store.
2. Create a new record if a new host (_key) has discovered vulnerabilities, so it is tracked in future scans.
3. Bonus: dynamically rename Signatures to the scan date. In my experience rename only accepts a static string, without being able to incorporate Splunk timestamp logic for dynamic field naming.

I feel like this should be rather easy to accomplish, but I have tried append=true and subsearches to update new scan data for each host, and I keep running into issues with updating and maintaining results for multiple scan dates across each host. I would like to capture everything properly and then use outputlookup to update the original KV Store so it maintains a historical record of each weekly scan over time. Do I need to be more mindful of how I use outputlookup in conjunction with key_field=host/_key?

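[Editor's note] A rough sketch of one possible weekly update, assuming a lookup definition named HISTORICAL_VULN_KV points at the collection and the collection accepts new field names (enforceTypes disabled). It builds this week's dated column with a dynamically named field, merges with the existing records, and writes everything back using host as _key:

index=vulnerabilitydata
| stats values(signature) as Signatures by host
| eval Signatures=mvjoin(Signatures, "; ")
``` build a field named after the scan date, e.g. "Vulnerabilities discovered on 02/20" ```
| eval scan_col="Vulnerabilities discovered on ".strftime(now(), "%m/%d")
| eval {scan_col}=Signatures
| fields - Signatures scan_col
``` pull in the existing records and merge old and new columns per host ```
| inputlookup append=true HISTORICAL_VULN_KV
| stats values(*) as * by host
| eval _key=host
| outputlookup HISTORICAL_VULN_KV

Existing hosts keep their earlier dated columns and gain the new one; unseen hosts simply become new records. The {scan_col} eval is the usual workaround for rename not accepting computed field names.
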
I've got a dashboard where I want to append multiple searches together based on checkbox inputs, but the variables aren't resolving. I can get conditional tokens to set the entire firewall_search string, but I don't want to have to set up every available combination if I can just get the nested variable to resolve to its value.

Summary: I ask for $src_ip$ and then try to use it inside a checkbox token value, nested in another variable, but the nested variable doesn't resolve and I just get the literal $src_ip$ text.

Example XML:

<input type="text" token="src_ip">
  <label>Source IP</label>
</input>
<input type="checkbox" token="firewall_search">
  <label>Firewalls</label>
  <choice value="append [search index=index1 src_ip=$src_ip$ other_specific_search_Stuff_to_index1 ]">firewall 1</choice>
  <choice value="append [search index=index2 src_ip=$src_ip$ searches_different_from_1 ]">firewall 2</choice>
  <delimiter> | </delimiter>
</input>
</fieldset>
<row>
  <panel>
    <title>dependent search</title>
    <table>
      <search>
        <query>$firewall_search$ | stats count by index, src_ip, otherfields</query>
        <earliest>$time_pick.earliest$</earliest>
        <latest>$time_pick.latest$</latest>
      </search>

When I run that, $firewall_search$ resolves to the choice value listed, but $src_ip$ inside it is not resolved to the value set in the form.

I'm logged into my system as an admin, so I have access to all the indexes. I've also verified this by looking at the admin role; it does have access to all the indexes. However, when I run the two searches below I get different counts, and I would expect them to be the same. The first one gives the lower count. It's a pretty low-volume dev system, so the counts are small, but they differ by about 20,000 events. I ran both over a time range of yesterday so the counts wouldn't change while I compared them. Does anyone have thoughts on what could be causing this?

| tstats count ```the count was 95835```

vs

| tstats count where index IN (*) ```the count was 115339```

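[Editor's note] One way to narrow this down is to break both counts out by index and compare which indexes appear in each list: a bare | tstats count is typically scoped to the role's default search indexes, while index IN (*) matches every non-internal index the role can access. A quick diagnostic sketch:

| tstats count where index=* by index
| eval scope="index=*"
| append
    [| tstats count by index
     | eval scope="no where clause"]
| chart sum(count) over index by scope

Any index that shows a count only in the "index=*" column is being searched by the second query but is not in the role's default searched indexes, which would explain the gap.
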
Hello,

I have created a KV Store lookup, and when I try to perform an outputlookup to it I get the following error:

Error in 'outputlookup' command: Could not write to collection 'test1': An error occurred while saving to the KV Store

The log file gives the following information:

02-16-2023 16:56:51.841 ERROR KVStoreLookup [79866 phase_1] - KV Store output failed with err: The provided query was invalid. (Document may not contain '$' or '.' in keys.) message:
02-16-2023 16:56:51.841 ERROR SearchResultsFiles [79866 phase_1] - An error occurred while saving to the KV Store. Look at search.log for more information.
02-16-2023 16:56:51.905 ERROR outputcsv [79866 phase_1] - An error occurred during outputlookup, managed to write 0 rows
02-16-2023 16:56:51.905 ERROR outputcsv [79866 phase_1] - Error in 'outputlookup' command: Could not write to collection 'test1': An error occurred while saving to the KV Store. Look at search.log for more information.

Can you please advise on this? Thanks in advance.

Siddarth

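[Editor's note] This error usually means one of the field names being written (not the values) contains a '.' or '$', which the KV Store's backing database rejects as a document key; automatically extracted JSON/structured field names such as properties.id are a common source. A hedged sketch of sanitizing the names before the write (the wildcard rename handles one dot level):

... your existing search ...
| rename *.* as *_*
| outputlookup test1

Running the search with a trailing | fieldsummary first is a quick way to spot any remaining field names containing '.' or '$'.
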
Hi all,

I'm working on a dashboard in which I populate a panel with summary data. The summary search runs once per hour over a very large dataset, looking back over the past 60 minutes, and generates an | xyseries of response codes split by another field (field1).

What I'd like to do is fill the gap in the summary data with live data via a subsearch, if possible. Essentially the panel is only half filled during the gap between summary runs, e.g. the summary ran at 10 AM and you're looking at the dashboard at 10:30 AM. It also behaves poorly for small time increments, say 15 minutes. For now I'm limited by resources and cannot make the summary load run more frequently; I expect that would fix the problem (say, if it ran every 5 minutes).

Is it possible to dynamically generate a time range for a live search based on where the summary data has a gap?

Within the panel, I pull summary data like so:

index=summary source="summaryName" sourcetype=stash search_name="summaryName" field1=*
| stats sum(resp*) AS resp* BY _time
| untable _time responseCode Volume
``` this part is just processing to apply the description I need to the response code ```
| eval trimmedCode=substr(responseCode, -2)
| lookup responseLookup.csv ResponseCode AS trimmedCode OUTPUT ResponseDescription
| eval ResponseDescription=if(isnull(ResponseDescription), " Description Not Available", ResponseDescription)
| eval Response=trimmedCode+" "+ResponseDescription
``` since volume is established by the summary index, sum(Volume) here ```
| timechart partial=f limit=0 span=15m sum(Volume) AS Volume BY Response

I then append a subsearch to pull the same info from live data, using another timechart after the processing:

| timechart span=15m limit=0 count AS Volume by Response
``` presumably here, I would need to sum(Volume) AS Volume to add up the totals ```

I looked into some other searches that pull a single event from a summary index and tried to generate timestamps based on that, but I haven't had much luck. Like so:

index=summary source=summaryName sourcetype=stash
| head 1
| eval SourceData="SI"
| append [| makeresults | eval SourceData="gentimes"]
| head 1
| addinfo
| eval earliest=if(SourceData="SI",if(_time>info_min_time,_time,info_min_time),info_min_time)
| eval latest=info_max_time
| table earliest,latest

Any ideas? Is this possible, or am I limited by the summary data?

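[Editor's note] One pattern that may work is to let a subsearch compute the live-data window from the newest summary event and hand it to the appended search as earliest/latest. A rough sketch under assumed names (index=web_raw and the responseCode field in the raw events are placeholders; the description processing is shared after the append):

index=summary source="summaryName" sourcetype=stash search_name="summaryName" field1=*
| stats sum(resp*) AS resp* BY _time
| untable _time responseCode Volume
| append
    [ search index=web_raw field1=*
        ``` this subsearch turns the newest summary timestamp into the live window ```
        [ search index=summary source="summaryName" sourcetype=stash search_name="summaryName"
          | stats max(_time) as earliest
          | eval latest=now()
          | return earliest latest ]
      | bin _time span=15m
      | stats count AS Volume BY _time responseCode ]
| eval trimmedCode=substr(responseCode, -2)
| lookup responseLookup.csv ResponseCode AS trimmedCode OUTPUT ResponseDescription
| eval Response=trimmedCode+" "+coalesce(ResponseDescription, "Description Not Available")
| timechart partial=f limit=0 span=15m sum(Volume) AS Volume BY Response

The | return earliest latest trick substitutes earliest="<epoch>" latest="<epoch>" into the live search, so it only ever covers the gap between the last summary run and now; the final timechart then sums summary and live volume together.
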
Hello All,

I'm a Splunk newbie and I have three questions. In our new Splunk deployment we have 1 search head, 1 deployment server, 2 indexers, and 2 forwarders, set up as a distributed environment.

Question #1 - Is there a streamlined or quicker way to set up each server using the CLI, on both Linux and Windows?
Question #2 - Do I need to install apps or add-ons on every server to get the whole deployment to work?
Question #3 - How would I know which add-ons and apps are important to our deployment?

This is a project of mine and time is a factor. Please assist. The sketch after this post shows the kind of CLI wiring Question #1 is asking about.

Thanks,
RPM

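[Editor's note] For Question #1, much of the per-server wiring can be scripted with the splunk CLI instead of the web UI. A hedged sketch of the forwarder side on Linux (hostnames, ports, and credentials are placeholders; on Windows run splunk.exe from the bin directory):

# Point the forwarder at the deployment server so apps/add-ons can be pushed to it
/opt/splunkforwarder/bin/splunk set deploy-poll deploy01.example.com:8089 -auth admin:changeme

# Send data to both indexers
/opt/splunkforwarder/bin/splunk add forward-server idx01.example.com:9997 -auth admin:changeme
/opt/splunkforwarder/bin/splunk add forward-server idx02.example.com:9997 -auth admin:changeme

/opt/splunkforwarder/bin/splunk restart

For Questions #2 and #3, the general pattern is that technology add-ons go where the data is collected or parsed (forwarders/indexers) and their companion apps go on the search head; each add-on's documentation states where it needs to be installed.
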
I am working on a dashboard for my help desk and would like to embed a website, https://status.twilio.com/, so that staff can easily see from the vendor whether they are experiencing outages, all from a single pane of glass. I can see how to do this in older versions of Splunk with HTML dashboards, but not with the new version of dashboards, and I'm not having much luck figuring it out. I attempted the code below via a drilldown, but clearly I'm not getting what I want, which is for the content to display as an embedded page within the dashboard. My drilldown doesn't work either. Below is the code for the panel where I am trying to get this working. Not being a coder and not familiar with XML, I realize I am asking for quite a bit of help. Any insight anyone can offer is greatly appreciated.

{
    "type": "splunk.table",
    "options": {
        "count": 100,
        "dataOverlayMode": "none",
        "drilldown": "none",
        "percentagesRow": false,
        "rowNumbers": false,
        "totalsRow": false,
        "wrap": true
    },
    "dataSources": {},
    "title": "Twilio Status",
    "description": "",
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "https://status.twilio.com/"
            }
        }
    ],
    "context": {},
    "showProgressBar": false,
    "showLastUpdated": false
}

Is there a Splunk add-on for the Oracle ZFS appliance to collect data?

I downloaded the app to my HF and send my alarms there from vCenter via SNMP. I also configured the following SNMP input and opened the ports needed. What else do I need to do for it to work, so that I can see the events (in my SH, of course)? What am I missing?

Hi everyone,

I have a script.py which requires one argument to run, e.g.:

script.py D:\Downloads\12-Dec-2022\1234\

I intend to create a custom search command so that I can have a Splunk dashboard GUI which lets the user input the file path (i.e. D:\Downloads\12-Dec-2022\1234\); in the backend it will run script.py D:\Downloads\12-Dec-2022\1234\ and generate a CSV file, which I will then format with Splunk search commands.

My question is how to write the generator.py script so that it calls script.py. I have a template I found:

#!/usr/bin/env python
import sys
import os

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import \
    dispatch, GeneratingCommand, Configuration, Option, validators


@Configuration()
class %(command.title())Command(GeneratingCommand):
    """ %(synopsis)

    ##Syntax

    %(syntax)

    ##Description

    %(description)

    """

    def generate(self):
        # Put your event code here

        # To connect with Splunk, use the instantiated service object which is created using the server-uri and
        # other meta details and can be accessed as shown below
        # Example:-
        #    service = self.service

        pass


dispatch(%(command.title())Command, sys.argv, sys.stdin, sys.stdout, __name__)

However, I am not sure how to write it so that the command accepts an argument (e.g. the file path entered by the user).

The way I see it, I have three pieces:
1. A custom search command named mycommand
2. My own script.py, which accepts one argument (a file path) and generates stats
3. The Splunk search itself

Once I have the custom search command mycommand, I can use it in a Splunk search, something like | mycommand <user input>. However, when writing the custom search command I'm not sure how to make it accept an argument entered by the user in the Splunk GUI. Can anyone help, please?

Hi,

I have the below event:

{"log":"2023-02-16t14:14:25.827471424z stderr F I0216 14:14:25.827359               1 connection.go:153] connecting to UNIX:///csi/csi.sock"

I need to remove "2023-02-16t14:14:25.827471424z stderr F I0216 14:14:25.827359". I have tried rex but haven't been able to get it working; I wondered if someone could help me?

Thanks,
Joe

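[Editor's note] If the goal is just to strip that prefix at search time, a sed-style rex may do it. A minimal sketch, assuming the prefix always follows the same shape (container timestamp, "stderr F", the klog tag, then a second timestamp) and that the text sits in _raw; swap field=_raw for field=log if you want to clean the extracted field instead:

| rex field=_raw mode=sed "s/\d{4}-\d{2}-\d{2}t\S+z\s+stderr\s+F\s+\S+\s+[\d:.]+\s+//"

If it should happen at index time instead, the same substitution could go into a SEDCMD in props.conf for that sourcetype.
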
Hi folks,

I'm evaluating a situation related to enabling SAML auth on SOAR, where previously I was using local accounts. Because of that, objects like assets, playbooks, etc. are currently tied to the local user IDs, while the SAML users have different user IDs. I'm looking for ideas on how to transfer that ownership from the local user ID to the new SAML user ID, so the users still own those objects after changing their login type. Another option, though probably not doable, would be to have the SAML login use the same user ID as the local one (one replacing the other); that would also be interesting to explore.

Hi Team,

I have events being pushed to an HTTP Event Collector 24/7. In my dashboard I query and format the events using the transaction command, grouping on a field called traceparent. It works, but the report only ever shows 4999 transactions. Is this a limit set on the Splunk server? Where are these limits set, and are there guidelines for increasing them without negatively impacting server performance?

I also observed that if I already have 4999 transactions by 10 AM, the new transactions arriving after 10 AM are not displayed by the query; I have to change the time picker to 'last 60 min', 'last 15 min', etc. to see the latest ones. Even when my query hits the 4999 cap, how can I make sure those 4999 transactions are the latest ones (counting back from when the query is executed) rather than the oldest? For example, if I run the query at 2 PM, I want the 4999 transactions from 2 PM back toward 11 AM. How can I achieve that?

Thank you.

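[Editor's note] One way to sidestep the transaction limits entirely is to group with stats, which scales better and lets you keep whichever rows sort to the top. A rough sketch with assumed names (index, sourcetype, and the message field are placeholders to adjust to your data):

index=hec_index sourcetype=my_hec_sourcetype
| stats earliest(_time) as start latest(_time) as end values(message) as messages count as events by traceparent
| eval duration=end-start
| sort 0 - end

Sorting on the end time descending (sort 0 disables the default sort cap) means that if anything is truncated later, it is the oldest traces that drop off rather than the newest.
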
source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN TCODE="ZORF_BOX_CLOSING" SYUCOMM="SICH_T" ACCOUNT=$s_user$
| eval RESPTI = round(RESPTI/1000,2), DBCALLTI=round(DBCALLTI/1000,2)
| timechart avg(RESPTI) as "Average_Execution_Time", avg(DBCALLTI) as "Average_DB_Time", max(RESPTI) AS "Max_Execution_Time", max(DBCALLTI) as "Max_DB_Time"
| eval Average_Execution_Time = round(Average_Execution_Time,2), Average_DB_Time=round(Average_DB_Time,2)
| eval Max_Execution_Time = round(Max_Execution_Time,2), Max_DB_Time = round(Max_DB_Time,2)
| search Average_Execution_Time !=""
| search Max_Execution_Time !=""

This is the search I am working with, and as written it works fine. However, I need to add a span to the timechart. I have a dropdown menu that provides the token $span$, and using that also works fine. What I actually need is a span that matches the time range picker. The token from the time range picker is $tok_range$, but if I try to use that as the span the panel just says "Search is waiting for input". Is there a way to do this?

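[Editor's note] $tok_range$ is a composite token (it has .earliest/.latest parts), which is why the panel keeps waiting for input. One SPL-side workaround sometimes used is to let a subsearch compute a span from the selected range and inject it into timechart; a sketch of the idea, with the breakpoints chosen arbitrarily:

source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN TCODE="ZORF_BOX_CLOSING" SYUCOMM="SICH_T" ACCOUNT=$s_user$
| eval RESPTI = round(RESPTI/1000,2), DBCALLTI=round(DBCALLTI/1000,2)
| timechart
    [| makeresults
     | addinfo
     | eval range=info_max_time-info_min_time
     | eval span=case(range<=3600,"1m", range<=86400,"15m", range<=604800,"1h", true(),"1d")
     | return span ]
    avg(RESPTI) as "Average_Execution_Time", avg(DBCALLTI) as "Average_DB_Time", max(RESPTI) AS "Max_Execution_Time", max(DBCALLTI) as "Max_DB_Time"

The subsearch inherits the panel's time range, so it returns span="..." scaled to whatever the picker is set to. Alternatively, dropping span entirely and using timechart bins=200 lets Splunk choose a bucket size for the selected range with no tokens involved.
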
Hello Splunkers,

I am trying to create an alert for when a log with the "UP" state is not received within 15 minutes of the time a "DOWN" state log was received. Can anyone help me out?

Scenario: when a device goes down, Splunk receives a log from SolarWinds saying the device is "DOWN", along with the host name in the log. If Splunk does not then receive a log containing the "UP" state from SolarWinds within the next 15 minutes, an alert must be raised.

Can anyone help me create a query for this alert? Thanks in advance.

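[Editor's note] One hedged way to express this is a scheduled alert (e.g. every 5 minutes over the last 30 minutes) that keeps only devices whose most recent state is DOWN and whose DOWN event is already more than 15 minutes old. A sketch, assuming the SolarWinds events land in index=solarwinds, carry the device name in a host-like field, and contain the words UP/DOWN in the raw text:

index=solarwinds ("DOWN" OR "UP") earliest=-30m
| eval state=case(searchmatch("DOWN"), "DOWN", searchmatch("UP"), "UP")
| stats latest(state) as last_state latest(_time) as last_seen by host
| where last_state="DOWN" AND last_seen <= relative_time(now(), "-15m")

Set the alert to trigger when the number of results is greater than zero; each row is a device that went DOWN and has not reported UP within 15 minutes.
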
Hi, I'm quite new to Splunk and need your help. I'm trying to combine SPL with SQL. The field 25 is an event ID, the same as ele.batch_event_id in the SQL. I suspect ele.batch_event_id = $25$ is wrong. Any ideas, please?

The error is:

Unable to run query '| dbxquery query= "SELECT MIN (ele.process_time) as MIN_PROCESS_time ,MAX (ele.process_time) as MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = $25$ AND process_time BETWEEN TO_DATE('20230215:00:00','YYYYMMDD hh24:mi:ss') and TO_DATE('20230216 12:59:59','YYYYMMDD hh24:mi:ss') " connection='stardb' '.

The search:

index=star_linux sourcetype=engine_processed_events 2961= BBHCC-S2PBATCHPOS-BO OR BBHCC-S2PBATCHPOS-B2 OR BBHCC-S2PBATCHPOS-PO OR BBHCC-SOD-IF-Weekday-1 AND 55:GEN_STAR_PACE
| table 4896,25,55,2961
| map search="| dbxquery query= \"SELECT MIN (ele.process_time) as MIN_PROCESS_time ,MAX (ele.process_time) as MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = $25$ AND process_time BETWEEN TO_DATE('20230215:00:00','YYYYMMDD hh24:mi:ss') and TO_DATE('20230216 12:59:59','YYYYMMDD hh24:mi:ss') \" connection='stardb' "
| table 4896, 25, MIN_PROCESS_time, MAX_PROCESS_time

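[Editor's note] One thing worth trying: map substitutes $fieldname$ tokens from the incoming rows, and a field whose name is just the digit string 25 is easy for map (and for dbxquery's own variable handling) to mistreat. A hedged sketch that renames the field to something alphabetic before the map, so the token is unambiguous (the SQL is elided here; keep your full statement):

index=star_linux sourcetype=engine_processed_events ...
| rename "25" as batch_event_id
| table 4896, batch_event_id, 55, 2961
| map maxsearches=50 search="| dbxquery query= \"SELECT ... WHERE ... AND ele.batch_event_id = $batch_event_id$ AND ... \" connection='stardb' "
| table 4896, batch_event_id, MIN_PROCESS_time, MAX_PROCESS_time

Also note that map only runs against rows that actually contain the token field, so check that the field 25 (or batch_event_id after the rename) is populated on every row being passed in.
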
Hi Splunk Gurus,

I am new to lookups and this community has been a great help. I have a few cases where I can't seem to remove rows from a lookup correctly, and I can't find a solution. I have a lookup table that lists maintenance windows on servers. My CSV lookup has four columns: CI, chgreq, mStart, and mStop. Example:

serverA     CHG0001     2023-02-16 00:00     2023-02-17 13:00

I am pulling in emails from an O365 mailbox that allow adding and clearing these maintenance windows. Adding new rows to my lookup works fine, but when I try to remove rows I get a blank lookup. Here is the search I am using:

index="maintenance_emails" Clear Maintenance
| rex field="subject" "Clear Maintenance for (?<server_name>.+)"
| inputlookup append=t maintenance_windows.csv
| where CI!=server_name
| eval CI=server_name, chgreq=chgreq, mStart=mStart, mStop=mStop
| outputlookup maintenance_windows.csv

The server_name field has the correct server name in it, and it matches a CI entry in my lookup, but when I run the search I get a blank lookup table. From some testing it looks like my where statement is not doing what I expect.

I appear to have the same issue when trying to remove old maintenance-window entries from the table by comparing the values in the mStop column to the current date and time, although this may be a separate issue (i.e. with the date/time format or comparison):

| eval cur_time=strftime(now(), "%Y-%m-%d %H:%M")
| inputlookup append=t maintenance_windows.csv
| where mStop<=cur_time
| eval CI=server_name, chgreq=chgreq, mStart=mStart, mStop=mStop
| outputlookup maintenance_windows.csv

Any help would be very much appreciated.

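[Editor's note] Part of the trouble is likely that server_name (and cur_time) only exist on the email/makeresults rows, not on the rows appended from the lookup, so the where clause compares the lookup rows against a null field and filters everything out. A hedged sketch that filters the lookup rows directly, using a subsearch to supply the CI values to drop:

| inputlookup maintenance_windows.csv
| search NOT
    [ search index="maintenance_emails" "Clear Maintenance"
      | rex field=subject "Clear Maintenance for (?<CI>.+)"
      | dedup CI
      | fields CI ]
| outputlookup maintenance_windows.csv

For the expiry cleanup, the comparison can be done entirely on the lookup rows, e.g. | inputlookup maintenance_windows.csv | where mStop > strftime(now(), "%Y-%m-%d %H:%M") | outputlookup maintenance_windows.csv, keeping only windows that have not yet ended; the string comparison works here because the timestamps are zero-padded and sort lexicographically.
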
Hello Splunk Community,

I have a table with results like below:

Name
Tom01
Tom02
Tom03
Tom04
Quin01
Yonah01
Yonah02

I want a query that, when the text before the numeric part matches, keeps only the 01 entry and ignores the others. For example, if Yonah01 and Yonah02 exist, they are a pair, so it should exclude Yonah02 and keep only Yonah01; likewise, if there are Tom01, Tom02, Tom03, and Tom04, it should exclude everything except Tom01.

Thank you.

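[Editor's note] A minimal sketch of one way to do this, assuming the field is literally named Name and every value ends in digits: extract the text prefix, sort so the lowest-numbered entry comes first, and dedup on the prefix.

| rex field=Name "^(?<base>.+?)(?<num>\d+)$"
| sort 0 base num
| dedup base
| table Name

For Tom01..Tom04 the sort puts Tom01 first and dedup keeps only that row per base value; the same happens for Yonah01 over Yonah02.
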