All Topics

Hi. Can someone please help me extract multiple fields from a single slash-separated field using the rex command?

FIELD1 = ABCD/EFGH/IJ/KL/MN/OP/QRST

How can I create multiple fields from FIELD1 as below?

Field_1 = ABCD
Field_2 = EFGH
Field_3 = IJ
Field_4 = KL
Field_5 = MN
Field_6 = OP
Field_7 = QRST
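A minimal rex sketch, assuming FIELD1 always contains exactly seven forward-slash-separated segments (the field names are taken from the example above):

| rex field=FIELD1 "^(?<Field_1>[^/]+)/(?<Field_2>[^/]+)/(?<Field_3>[^/]+)/(?<Field_4>[^/]+)/(?<Field_5>[^/]+)/(?<Field_6>[^/]+)/(?<Field_7>[^/]+)$"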
Hi. Can someone please let me know how I can use the below expression (generated via Field Extraction) directly in a rex command?

Regular expression generated via Field Extraction:

^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)

I am using the rex command as below, but I am getting an error:

| rex field=Message mode=sed "(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH1>[^"]+)"
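A sketch of a likely fix, on the assumption that two things in the command above are the problem: mode=sed is for sed-style find-and-replace rather than field extraction, so it should be dropped, and the unescaped double quotes inside the pattern terminate the quoted string early:

| rex field=Message "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?<POH>[^\"]+)"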
Greetings. Does anyone know if it's possible to create a script that writes a Splunk search query based on an alert's results/table? For example, "Multiple Failure Attempts" uses the "Authentication" data model to display results and only shows specific fields such as username, total failure attempts, source IP, destination, etc. But I want to investigate further and check the raw logs to see more fields, so I have to write a new search query specifying the fields and their values to get all the information (index=* sourcetype=xxx user=xxx dest=xxx srcip=xxx) and then look for more fields in the displayed results. I would like to automate this process. Any suggestions for apps, scripts, or a recommended programming language?
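One hedged sketch of doing this inside Splunk itself: the map command substitutes field values from each result row into a templated follow-up search, so an alert's result table can drive the raw-log query directly (the index and field names below are placeholders, not a tested configuration):

<alert search returning user, src_ip, dest>
| map maxsearches=10 search="search index=* user=$user$ src_ip=$src_ip$ dest=$dest$"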
We have an issue where we created a single default frozen folder instead of a frozen folder per index, and now we have data in that shared frozen folder that we want to restore to searchable data. How can I identify the index name of that data, or, if I can't identify the index name, how can I restore it to an arbitrary index?
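A sketch of the usual thaw procedure, assuming a standalone indexer; the frozen path and bucket name are placeholders. Note that a frozen bucket's directory name records only its time range and ID, not its index, so with a shared frozen folder you generally have to thaw a bucket and inspect its events to work out where it belongs:

# copy the bucket into the thaweddb directory of the target (or scratch) index
cp -r /opt/frozen/db_1728092168_1728090293_42 $SPLUNK_HOME/var/lib/splunk/<index>/thaweddb/
# rebuild its index files so it becomes searchable
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/<index>/thaweddb/db_1728092168_1728090293_42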
Hello everyone. I have the following Splunk query, which I am trying to build for a dropdown in a dashboard. There are two dropdowns; the first dropdown has static values, which are the index names: index_1, index_2, index_3. Based on the selected index, I am trying to run this Splunk query:

index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2")
| search hostname=hostname_pattern

The search always returns empty. However, if I run the direct query for index_1 or index_2 with its relevant hostname, it works and returns results:

index="index_1" | search hostname="*-hostname_1"

To check whether my condition is working, I fed the output of the eval case into a table and checked by passing the relevant indexes (index_1 or index_2):

index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2")
| stats count by hostname_pattern
| table hostname_pattern
| sort hostname_pattern

This returns *-hostname_1. I'm not sure how to pass the hostname value based on the selected index into the search. I'd highly appreciate your help.
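The likely culprit is that | search hostname=hostname_pattern compares hostname against the literal string "hostname_pattern", not the field's value. A sketch of one workaround, converting the eval'd wildcard pattern into a like() match (like() uses SQL-style % wildcards, hence the replace):

index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2")
| where like(hostname, replace(hostname_pattern, "\*", "%"))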
Hi guys, I have an issue with newly set up HFs and UFs. The Windows UFs' logs are reaching the indexers, while the Linux UFs' logs are not. Communication between the Linux UF and the HF is fine, as observed using tcpdump: the Linux UF is sending traffic, and the HF receives and processes it. Can you help with what needs to be checked on the UF or HF?
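Since traffic reaches the HF, the gap is often between reading files on the UF and what the indexers accept. A hedged sketch of first checks on the Linux UF (paths assume a default /opt/splunkforwarder install):

# confirm the forwarding target is active
/opt/splunkforwarder/bin/splunk list forward-server
# confirm the monitor stanzas are actually in effect
/opt/splunkforwarder/bin/splunk btool inputs list --debug
# look for permission or queue problems
grep -iE "blocked|permission denied|TcpOutputProc" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20

Two common culprits: the Linux inputs point at an index that does not exist on the indexers, or the splunk user lacks read permission on the monitored files.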
Hello, I am following this tutorial to create a Splunk app using React on macOS Sonoma: https://splunkui.splunk.com/Toolkits/SUIT/AppTutorial. However, I am not able to get it to work. The 'start' view is simply not added to the app's views in Splunk, even though the files are there in my app. I wasn't even able to launch the app until I set it to 'Visible' by going to 'Manage Apps' and editing its properties; it should already have been visible, because it is set as such in my app.conf. After I launched it, I was redirected to the search page (image below). If I go to the URL http://localhost:8000/en-US/app/my-splunk-app/start, I get the 'Page not found' error page. Could someone please help me with this?
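One thing worth checking, as an assumption based on how Splunk Web registers views in a standard app layout: each page must exist both as an XML stub under default/data/ui/views and as an entry in the navigation file, and Splunk Web needs a restart (or debug/refresh) after new files are added. A sketch of the nav file, with the view name "start" taken from the question:

<!-- default/data/ui/nav/default.xml -->
<nav>
  <view name="start" default="true"/>
  <view name="search"/>
</nav>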
My query returns these events. I need to compute the total time A was in this state and the total time B was in this state. My thought is to subtract the timestamp of the first A from the most recent A, and likewise for B, but I can't figure out the right way to do this.

Timestamp          Job  Date       LoggedTime  Ready
1728092168.000000  A    10/4/2024  21:36:03    1
1728092163.000000  A    10/4/2024  21:35:50    1
1728092150.000000  A    10/4/2024  21:35:27    1
1728092127.000000  A    10/4/2024  21:35:16    1
1728090335.000000  B    10/4/2024  21:05:15    2
1728090315.000000  B    10/4/2024  21:05:03    2
1728090303.000000  B    10/4/2024  21:04:53    2
1728090293.000000  B    10/4/2024  21:04:31    2
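A minimal sketch using stats, assuming "total time in the state" means the span between the earliest and latest event per Job:

| stats min(Timestamp) AS firstSeen, max(Timestamp) AS lastSeen by Job
| eval duration_sec = lastSeen - firstSeen
| eval duration = tostring(duration_sec, "duration")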
I'm trying to monitor a separate print server folder, outside where Splunk is hosted, that contains print logs and is reached via a UNC path. The folder only has .log files in it. I have the following index created: index = printlogs. When I try to add the folder path in Splunk through the add data feature ("Add Data" > "Monitor" > "Files & Directories"), I get to submit and then get an error: "Parameter name: Path must be absolute". So I added the following stanza to my inputs.conf file in the system/local folder:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\*.log]
index = printlogs
host = cpn-prt01
disabled = 0
renderXml = 1

I created a second stanza with index = printlogs2 (and a respective index) to monitor the following path, to see if I can pull straight from the path and ignore the file type inside:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\]

I do see the full path to both in the "Files & Directories" list under Data Inputs. However, I am not getting any event counts when I look at the respective indexes on the Splunk Indexes page. I did a Splunk refresh and even restarted the Splunk server, with no luck. I thought maybe someone has run into a similar issue or has a possible solution. Thanks in advance.
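A sketch of a single monitor stanza, assuming the Splunk service runs as a domain account with read access to the share; the Local System account cannot read remote UNC paths, which silently yields zero events. renderXml applies to Windows Event Log inputs rather than file monitors, so it is dropped here:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs]
whitelist = \.log$
index = printlogs
host = cpn-prt01
disabled = 0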
Hello, I am running two separate queries to extract values.

First query:

index=abc status=error | stats count AS FailCount

Second query:

index=abc status=planning | stats count AS TotalPlanned

Both queries are working well and giving the expected results. When I combine them using a subsearch, I get an error:

index=abc status=error
| stats count AS FailCount
    [ search index=abc status=planning | stats count AS TotalPlanned | table TotalPlanned ]
| eval percentageFailed=(FailCount/TotalPlanned)*100

Error message:

Error in 'stats' command: The argument '(( TotalPlanned=761 )) is invalid'

Note: The count 761 is a valid count for TotalPlanned, so it did perform that calculation.
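The subsearch result is spliced into the stats command as a search expression, which produces exactly that error. A sketch that avoids the subsearch by counting both statuses in a single pass:

index=abc (status=error OR status=planning)
| stats count(eval(status="error")) AS FailCount, count(eval(status="planning")) AS TotalPlanned
| eval percentageFailed = round((FailCount / TotalPlanned) * 100, 2)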
A user is receiving duplicated field names in Splunk results. For example, when I run a search I get output of field1=Value1, but when the user runs the same search he gets output of field1="field1=value1". Does anyone know what I need to do to help the user get the same result as mine?
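This usually points to a knowledge object (often a private field extraction) that exists in one user's context and not the other's. A hedged sketch for comparing extraction definitions and their owners; the sourcetype filter is a placeholder:

| rest /servicesNS/-/-/data/props/extractions
| search title="<your sourcetype>*"
| table title, value, eai:acl.owner, eai:acl.sharing, eai:acl.app

Also worth confirming is that both of you run the search in the same app context and with the same role-based permissions.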
I have an issue after upgrading an HF from Splunk 9.2.1 to 9.2.2. The OS is Red Hat 8.10 with the latest kernel version. I have tried changing/granting permissions on the splunk folder and setting SELinux to permissive mode.

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk start --accept-license --answer-yes
Error calling execve(): Permission denied
Error launching systemctl show command: Permission denied
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
Can't run "btool server list clustering --no-log": Permission denied
[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk btool server list clustering --no-log
execve: Permission denied while running command /mnt/splunk/splunk/bin/btool
[afmpcc-prabdev@sgmtihfsv001 splunk]$
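A sketch of checks for the two usual causes of execve failures on RHEL after an in-place upgrade, SELinux labels and a noexec mount (the paths are taken from the output above):

# inspect and restore SELinux contexts on the Splunk tree
ls -lZ /mnt/splunk/splunk/bin/splunk
sudo restorecon -Rv /mnt/splunk/splunk
# verify the filesystem is not mounted noexec
mount | grep /mnt/splunk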
I am trying to track a set of service desk ticket statuses across time. The data input is a series of ticket updates that come in as changes occur. Here is a snapshot:

[screenshot of the ticket update events]

What I'd like to do with this is get a timechart with the status at each time point; however, the "blank" time events are being filled in with zeros, whereas I need the last valid value instead. My naive query is:

index="jsm_issues"
| sort -_time
| dedup _time key
| timechart count(fields.status.name) by fields.status.name

Which gives me:

[screenshot of the resulting timechart]

How can I query so that these zeros get filled in with the last valid count of ticket statuses? Some things I've tried with no success:

- Some filldown kludges
- usenull=f on the timechart
- A million other suggestions on this forum that usually involve a simpler query

Any suggestions? Thanks!
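A sketch of one common workaround, assuming a zero in a bucket means "no status update" rather than a true zero: let timechart produce its zeros, convert them to null, then filldown the last valid value (the span is an assumption):

index="jsm_issues"
| dedup _time key
| timechart span=1h count by fields.status.name
| foreach * [ eval <<FIELD>> = if('<<FIELD>>' == 0, null(), '<<FIELD>>') ]
| filldown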
Hello everyone! Today I noticed strange messages in the daily warnings-and-errors report:

10-04-2024 16:55:01.935 +0300 WARN UserManagerPro [5280 indexerPipe_0] - Unable to get roles for user= because: Could not get info for non-existent user=""
10-04-2024 16:55:01.935 +0300 ERROR UserManagerPro [5280 indexerPipe_0] - user="" had no roles

I checked that this pair first appeared 5 days ago, but that doesn't help me because I don't remember what I changed on that exact day. I also tried to find some helpful "nearby" events that could help me understand the root cause, but didn't observe anything interesting. What options do I have to investigate this case? Maybe I can raise the logging level to DEBUG? If so, what should I change, and where? A little more information: I have a search head cluster with LDAP authentication, and an indexer cluster with only local users.
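You can raise the logging level for just that component instead of all of splunkd. A sketch, run on an indexer, with the component name taken from the messages above (remember to set it back afterwards):

$SPLUNK_HOME/bin/splunk set log-level UserManagerPro -level DEBUG
# ... reproduce the messages, then revert:
$SPLUNK_HOME/bin/splunk set log-level UserManagerPro -level WARN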
I am looking for an example of using Bearer authentication within Python using helper.send_http_request in the Splunk Add-on Builder. All the examples I have found so far have "headers=None".

Python helper functions: https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/PythonHelperFunctions
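A minimal sketch, assuming the token is already available (for example from an add-on argument named api_token, which is hypothetical); the headers parameter is an ordinary dict, and the argument list follows the signature on the page linked above:

# hypothetical endpoint and argument name, for illustration only
url = "https://api.example.com/v1/items"
token = helper.get_arg("api_token")

headers = {
    "Authorization": "Bearer {}".format(token),
    "Content-Type": "application/json",
}

# send the request with the bearer token in the Authorization header
response = helper.send_http_request(
    url, "GET", parameters=None, payload=None,
    headers=headers, cookies=None, verify=True, cert=None,
    timeout=30, use_proxy=True,
)
response.raise_for_status()
data = response.json()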
I have a query that will calculate the volume of data ingested for a sourcetype:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler* st=<your sourcetype here>
| stats sum(b)
| eval GB = round('sum(b)'/1073741824, 2)
| fields GB

The issue is that I have a list of 1,200 sourcetypes. Please suggest how I can fit the entire list into this query.
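A sketch that keeps the 1,200 sourcetypes in a lookup instead of the query, assuming a CSV lookup named sourcetypes.csv with a single column st (both names are placeholders); the subsearch expands into an OR of st=... terms:

index=federated:infosec_apg_share source=InternalLicenseUsage type=Usage idx=*_p* idx!=zscaler*
    [ | inputlookup sourcetypes.csv | fields st ]
| stats sum(b) AS bytes by st
| eval GB = round(bytes / 1073741824, 2)
| table st, GB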
Hello community, I need to set up a dashboard that tracks the status of an alert from Splunk OnCall. An alert can have 2 to 3 statuses, and I would like to retrieve the _time of each step and keep it in memory for each state (in particular, to calculate durations). I manage to retrieve the _time for each state in a dedicated field, but I cannot transfer this value to the other states:

index=oncall_prod originOnCall="Prod" incidentNumber=497764
| sort _time desc
| rex field=entityDisplayName "(?<Priorité>..) - (?<Titre>.*)"
| eval startAlert = if(alertType == "CRITICAL", _time, "")
| eval startAlert = strftime(startAlert,"%Y-%m-%d %H:%M:%S ")
| eval ackAlert = if(alertType == "ACKNOWLEDGEMENT", _time, "")
| eval ackAlert = strftime(ackAlert,"%Y-%m-%d %H:%M:%S ")
| eval endAlert = if(alertType == "RECOVERY", _time, "")
| eval endAlert = strftime(endAlert,"%Y-%m-%d %H:%M:%S ")
| table _time, incidentNumber, alertType, Priorité, Titre, startAlert, ackAlert, endAlert, ticket_EV

Do you have any idea how to do this? I searched the forum but couldn't find a solution that matched my problem. Sincerely, Rajaion
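A sketch that collapses the three event types into one row per incident with stats, so the three timestamps sit side by side and the durations fall out of a subtraction (field names taken from the query above):

index=oncall_prod originOnCall="Prod"
| stats min(eval(if(alertType == "CRITICAL", _time, null()))) AS startAlert,
        min(eval(if(alertType == "ACKNOWLEDGEMENT", _time, null()))) AS ackAlert,
        max(eval(if(alertType == "RECOVERY", _time, null()))) AS endAlert
        by incidentNumber
| eval timeToAck = ackAlert - startAlert, timeToRecover = endAlert - startAlert
| fieldformat startAlert = strftime(startAlert, "%Y-%m-%d %H:%M:%S")
| fieldformat ackAlert = strftime(ackAlert, "%Y-%m-%d %H:%M:%S")
| fieldformat endAlert = strftime(endAlert, "%Y-%m-%d %H:%M:%S")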
Hi. We are starting to use Splunk Infrastructure Monitoring and want to deploy the OTel Collector using our existing Splunk infrastructure (deployment server). We would really like to send the OTel data to IM through an HTTP_PROXY, but we do not want to change the data flow for the entire server, so we need a local HTTP_PROXY for the OTel Collector only. As I read the documentation, you need to set environment variables for the entire server and not just the otel-collector process. Does anyone have experience using an HTTP_PROXY with the OTel Collector? Kind regards, las
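On Linux with systemd, a drop-in can scope the proxy variables to the collector unit alone, so the rest of the server is untouched. A sketch, assuming the service is named splunk-otel-collector and with placeholder proxy host/port:

# /etc/systemd/system/splunk-otel-collector.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart splunk-otel-collector.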
I've seen someone use this traffic search function but can't find it myself:

[screenshot of the traffic search function]

How can I access this traffic search function? I know that I can run a search to get the same result, but I would like to be able to use this feature as well.
I have a lookup table that we update on a daily basis, with two fields that are relevant here: NAME and ID.

NAME     ID
Toronto  765
Toronto  1157
Toronto  36

I need to pull data from an index and filter for these three IDs. Normally I would just do:

<base search>
| lookup lookup_table ID OUTPUT NAME
| where NAME = "Toronto"

This works, but the search takes forever, since the base search pulls records from everywhere and filters afterward. I'm wondering if it's possible to do something like this (pseudo-code search incoming):

index=<index> ID IN ( [ | inputlookup lookup_table where NAME = "Toronto" ] )

Basically, I'm trying to save time by not pulling all the records at the beginning, and instead filtering on a dynamic value that I have to grab from a lookup table.
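A sketch of the working form of that idea: an inputlookup subsearch returns its rows as an OR'd filter, so keeping only ID makes the outer search filter at the index level (lookup and field names taken from the question):

index=<index>
    [ | inputlookup lookup_table where NAME="Toronto" | fields ID ]

The subsearch expands to ( ID=765 OR ID=1157 OR ID=36 ), so only matching events ever leave the index.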