
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

The Jenkins health dashboard uses index="jenkins_statistics" in its base search instead of the `jenkins_index` macro, unlike the previous version. Because of this change, the dashboard no longer shows any data in Splunk Cloud.
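
If the `jenkins_index` macro is still defined in the app, a minimal sketch of the fix is to edit the dashboard's base search and swap the hard-coded index for the macro, so the search follows whatever index name your Splunk Cloud stack actually uses ("..." stands for the rest of the base search):

Current base search:
index="jenkins_statistics" ...

Macro-based equivalent:
`jenkins_index` ...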
Hi Splunkers, I have a very simple question. When I configure indexes.conf, I know that one parameter I can set is repFactor. In a SmartStore scenario, we know that repFactor must be set to "auto" for each configured index. The question is this: in the official Splunk documentation, repFactor appears under "Per index options". Does that mean I cannot put it under the [default] stanza? Because if, to meet SmartStore requirements, I need to set it to auto for EVERY index, it would be quick and clean to set it once as a global setting. Luca
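
Settings placed under [default] in indexes.conf are inherited by every index stanza that doesn't override them, so a sketch like the following should satisfy the SmartStore requirement; verifying the effective value per index with splunk btool indexes list --debug is a good sanity check:

# indexes.conf
[default]
# inherited by every index unless an index stanza overrides it
repFactor = auto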
I have a sample XML which looks like this:

<script_family>Amazon Linux Local Security Checks</script_family>
<filename>al2023_ALAS2023-2025-816.nasl</filename>
<script_version>1.1</script_version>
<script_name>Amazon Linux 2023 : runfinch-finch (ALAS2023-2025-816)</script_name>
<script_copyright>This script is Copyright (C) 2025 and is owned by Tenable, Inc. or an Affiliate thereof.</script_copyright>
<script_id>214620</script_id>
<cves>
  <cve>CVE-2024-45338</cve>
  <cve>CVE-2024-51744</cve>
</cves>
<bids> </bids>
<xrefs> </xrefs>
<preferences> </preferences>
<dependencies>
  <dependency>ssh_get_info.nasl</dependency>
</dependencies>
<required_keys>
  <required_key>Host/local_checks_enabled</required_key>
  <required_key>Host/AmazonLinux/release</required_key>
  <required_key>Host/AmazonLinux/rpm-list</required_key>
</required_keys>
<excluded_keys> </excluded_keys>
<required_ports> </required_ports>
<required_udp_ports> </required_udp_ports>
<attributes>
  <attribute>
    <name>exploitability_ease</name>
    <value>No known exploits are available</value>
  </attribute>
  <attribute>
    <name>cvss3_temporal_vector</name>
    <value>CVSS:3.0/E:U/RL:O/RC:C</value>
  </attribute>
  <attribute>
    <name>vuln_publication_date</name>
    <value>2024/11/04</value>
  </attribute>
  <attribute>
    <name>cpe</name>
    <value>p-cpe:/a:amazon:linux:runfinch-finch cpe:/o:amazon:linux:2023</value>
  </attribute>
  <attribute>
    <name>cvss3_vector</name>
    <value>CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:N/A:N</value>
  </attribute>

I want to extract a few fields from the search using spath or similar methods. The name/value pairs should look something like:

key                      value
exploitability_ease      No known exploits are available
cvss3_temporal_vector    CVSS:3.0/E:U/RL:O/RC:C
solution                 Run 'dnf update runfinch-finch --releasever 2023.6.20250123' to update

I tried something similar to this but had no luck:

| spath input=_raw path="attributes.attribute[*].name" output=name
| spath input=_raw path="attributes.attribute[*].value" output=value
| table name value
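
spath addresses repeated elements with {} rather than [*], and extracting names and values in two independent spath calls loses their pairing. One pattern that keeps each name next to its value, sketched on the assumption that each event contains the whole <attributes> block: extract every <attribute> element as a multivalue field, expand it to one row per element, then pull name and value out of each element.

| spath input=_raw path="attributes.attribute{}" output=attr
| mvexpand attr
| spath input=attr path=name
| spath input=attr path=value
| table name value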
For some time now, the server class agents on my deployment server have been in Pending status, and yet the logs are still coming in. Does anyone know why? When I look at the forwarders page, though, every agent's status is OK! I just don't get it!
Hi, our project is planning to use Splunk ITSI for batch monitoring of Control-M jobs, with auto-healing as well. Would that be feasible with Splunk ITSI? Does ITSI have the capability to take an action, such as running a custom script to force-restart or force-OK a Control-M job, once certain conditions are met? Looking forward to your insights.
Hello, is it possible to configure a Universal Forwarder to automatically discover the location of web logs for IIS or Apache? I can get the locations programmatically and have a script for Windows and for Linux that returns a list of locations. Kind regards, Andre
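
As far as I know the UF has no discovery mechanism of its own, but wildcarded monitor stanzas come close when the logs live under predictable roots. A sketch assuming default IIS and Apache locations (the paths and sourcetypes are assumptions to adjust, perhaps generated from your script's output and pushed via a deployment server; in practice you would ship one version per platform):

# inputs.conf
[monitor://C:\inetpub\logs\LogFiles\W3SVC*\*.log]
sourcetype = iis
disabled = 0

[monitor:///var/log/apache2/access*.log]
sourcetype = access_combined
disabled = 0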
I have created an index via the CLI (script) for a custom application, but the index is not showing up in the Splunk GUI.
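
A quick way to confirm whether the index actually exists on the instance, and whether it is instead a visibility/permissions issue in the UI; your_index is a placeholder for the real name:

| rest /services/data/indexes splunk_server=local
| search title="your_index"
| table title homePath disabled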
Dear Splunkers, I need a search that tells me whether there is a host that has both of these logs. Below is a pseudo-search that shows what I really want:

index=linux host=* sourcetype=bash_history AND ("systemctl start" OR "systemctl enable")
| union [search index=linux host=* sourcetype=bash_history (mv AND /opt/)]

To make it clearer: I want a match only if a server generated one log that contains "mv" AND "/opt/" and another log that contains "systemctl start" OR "systemctl enable". Thanks in advance.
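
One common way to express this "host has both kinds of events" condition is to tag each event and then keep only hosts that show both tags. A sketch, assuming the terms appear verbatim in the raw events:

index=linux sourcetype=bash_history ("systemctl start" OR "systemctl enable" OR ("mv" "/opt/"))
| eval kind=case(like(_raw, "%systemctl start%") OR like(_raw, "%systemctl enable%"), "service", like(_raw, "%mv%") AND like(_raw, "%/opt/%"), "move")
| stats dc(kind) AS kinds by host
| where kinds=2

stats dc(kind) counts how many distinct tags each host produced, so kinds=2 keeps exactly the hosts that generated both types of log.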
Hello everyone, I'm having trouble getting Splunk to recognize timestamps correctly, and I hope someone can help me out. I'm importing an access log file where the timestamps are formatted like this:

[01/Jan/2017:02:16:51 -0800]

However, Splunk is not recognizing these timestamps and instead assigns the indexing time. I have tried adjusting the settings in the sourcetype configuration and have set the following values:

• Timestamp format: %d/%b/%Y:%H:%M:%S %z
• Timestamp prefix: \[
• Lookahead: 32

Unfortunately, the timestamps are still not recognized correctly. Do I need to modify props.conf or inputs.conf as well? Is my timestamp format correct, or should it be defined differently? Could there be another issue in my extraction settings? Should I maybe change the log file with some scripting in order to change the format? I would really appreciate any guidance! Thank you in advance. Best regards
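
Those values look right for [01/Jan/2017:02:16:51 -0800]. Expressed in props.conf (which is what the sourcetype UI writes under the hood), the equivalent sketch would be the following, where my_access_log is a placeholder for your sourcetype name and the stanza must live on the instance that parses the data (indexer or heavy forwarder):

# props.conf
[my_access_log]
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 32

Timestamp recognition happens at index time, so it only affects newly ingested events; re-index a sample of the file to verify, rather than re-checking events that were already indexed.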
Hello, is there any way to get each field name and its expression from a datamodel using the REST API (via a Splunk query)? I am already using this query, but the fields and their expressions come back shuffled:

| datamodel
| spath output=modelName modelName
| search modelName=Network_Traffic
| rex max_match=0 field=_raw "\[\{\"fieldName\":\"(?<fields>[^\"]+)\""
| rex max_match=0 field=_raw "\"expression\":\"(?<expression>.*?)\"}"
| table fields expression
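
The two rex calls extract names and expressions independently, which is why the pairing is lost. Expanding each calculation object first keeps each expression next to its output field. A sketch, assuming the model JSON follows the usual objects{}.calculations{} layout and that you are after the eval-based calculated fields:

| datamodel Network_Traffic
| spath input=_raw path=objects{}.calculations{} output=calc
| mvexpand calc
| spath input=calc path=outputFields{}.fieldName output=fieldName
| spath input=calc path=expression output=expression
| where isnotnull(expression)
| table fieldName expression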
We operate by using scheduled searches to periodically search through logs collected by Splunk and to trigger actions when log entries matching certain conditions are found. You can build a list of actions triggered recently (for example, within the past week) by searching for alert_fired="alert_fired" in the _audit index. Is it possible to join the log entries that matched in each search execution to that list? (I want the result of "| loadjob <sid>" for each search.) The expected output is a table with the search execution time (_time), the search name (ss_name), and the matching log entries.
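
One way to sketch this is with map, which runs a subsearch per input row and substitutes that row's field values; assuming the audit events carry a sid field, each row's sid feeds a loadjob. It only works while the dispatch artifacts still exist (their TTL has not expired), and map is expensive, so cap maxsearches:

index=_audit alert_fired="alert_fired" earliest=-7d
| table _time ss_name sid
| map maxsearches=50 search="| loadjob $sid$ | eval fired_time=$_time$, fired_search=\"$ss_name$\""

fired_time and fired_search are hypothetical field names added so each loaded result row keeps the execution time and search name from the audit list.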
Dear Splunkers, when useAck = true is set (https://docs.splunk.com/Documentation/Splunk/9.4.0/Forwarding/Protectagainstlossofin-flightdata), which is correct:

(a) the source peer sends the acknowledgment after writing the data to its file system and ensuring the replication factor is met, or
(b) the source peer sends the acknowledgment after writing the data to its file system?

Best regards,
Hi team, today we found that an error is thrown when we try to upload a Splunk app from a file (the file is downloaded from https://classic.splunkbase.splunk.com/app/4241/). Here is the error:

"File Transfer Blocked. The file you are trying to download or upload has been blocked in accordance with company policy. Please contact your system administrator if you believe this is an error."

However, it works fine if we install the same app through the in-product Splunkbase UI, and the file is exactly the same. Why can't we upload it from a file? We have tried with Splunk 8.2.7 and Splunk 9.2.1, and we are pretty sure everything worked fine about six months ago. Could you please help here? Thank you!
Hi, I'm getting these messages constantly. Splunk version 9.4.0, running on Windows.

LogFile: python.log
2025-01-31 23:24:17,145 +0100 WARNING splunk_internal_telemetry:53 - Failed to send telemetry event: [HTTP 401] Client is not authenticated
2025-01-31 23:24:17,146 +0100 INFO decorators:130 - loading uri: /en-us/custom/splunk_app_stream/ping/

web_service.log
2025-01-31 23:29:39,276 INFO [679d4ed33f235afb52220] decorators:130 - loading uri: /en-us/custom/splunk_app_stream/ping/
2025-01-31 23:29:45,106 WARNING [679d4ed914235b03d1d60] splunk_internal_telemetry:53 - Failed to send telemetry event: [HTTP 401] Client is not authenticated
2025-01-31 23:29:45,108 INFO [679d4ed914235b03d1d60] decorators:130 - loading uri: /en-us/custom/splunk_app_stream/ping/
(the same WARNING/INFO pair repeats about every five seconds)
I have a query from source A from which I need to get a list of three parameters back. One of these parameters is an ID, and I need to get the actual name of the object via another query, from source B, using that ID. In the end I want to create a table showing the three parameters plus the name. Any help would be greatly appreciated!
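
A sketch of one way to do the enrichment; every index and field name here (sourceA, sourceB, id, name, param1, param2) is a placeholder for whatever your actual data uses:

index=sourceA
| table id param1 param2
| join type=left id [ search index=sourceB | table id name ]
| table id name param1 param2

join has subsearch result limits, so for large lookups the usual alternative is searching both sources at once and rolling them up with stats values(...) by id.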
Hi guys, I need some help. I'm trying to check my suppression list at a REST endpoint. I have 100+ suppressions showing under notable suppressions, but I see only 20-30 of them at the endpoint https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions. Is there a way to see all 100+ suppressions at the endpoint?
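
REST endpoints page their results (30 entries by default); passing count=0 asks for all of them. A sketch of both forms:

https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions?count=0

| rest /servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions count=0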
I am trying to suppress some specific exceptions in business transactions until the developers can handle them in code, because they are messing up my Availability percentages. Although I seem to be able to suppress the errors so that they don't count against availability in the Tier, they continue to show up in the Business Transactions and in the Service Endpoints. If I have successfully suppressed an exception so that it no longer counts against Availability in the Tier, should that error also be suppressed in Business Transactions and Service Endpoints? I primarily need them suppressed in the Service Endpoints, because I have custom Service Endpoints set up for API calls for particular clients, for example. But even though I suppress the errors so that they no longer show up in the Tier, they still show up in BTs and SEPs. Is there a way to suppress an error so that it no longer counts as an error in BTs and SEPs? Thanks.
Dashboard Studio gives me the ability to drop panels and move them around, which I love. I can drag a panel on top of another and quickly create two equal-size panels, each 50% of the size of the dashboard. If I drag a third panel into the same area, though, I get three panels, one of which is 50% of the screen while the other two are 25% each. Is it possible to get them to be three equal sizes (~33%), or is my only option to fiddle with the sliders a bit and settle for good enough?
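
One escape hatch is the dashboard's source JSON: each entry in layout.structure has a position block with x, y, w and h in canvas pixels, so you can type exact widths instead of dragging. A sketch assuming a 1200-pixel-wide canvas and placeholder viz IDs:

"layout": {
  "structure": [
    { "item": "viz_A", "position": { "x": 0,   "y": 0, "w": 400, "h": 300 } },
    { "item": "viz_B", "position": { "x": 400, "y": 0, "w": 400, "h": 300 } },
    { "item": "viz_C", "position": { "x": 800, "y": 0, "w": 400, "h": 300 } }
  ]
}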
I have a few dashboards in Splunk that I want to play on a TV. The expectation is that dashboard 1 will be shown, then after a one-second gap dashboard 2 will appear on screen, then after a pause of a few seconds dashboard 3 will come up. If this isn't possible through Splunk, how else can I achieve it?
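
There's no built-in dashboard carousel that I know of, so one low-tech sketch is a small HTML page on the TV's kiosk machine that cycles an iframe through the dashboard URLs. The URLs are placeholders, and Splunk Web must allow being framed from wherever this page is served (see the frame-options settings in web.conf), so treat that as an assumption to verify:

<!-- rotate.html: minimal kiosk rotation sketch; dashboard URLs are placeholders -->
<iframe id="dash" style="width:100vw; height:100vh; border:0"></iframe>
<script>
  var urls = [
    "https://splunk.example.com/en-US/app/search/dashboard_one",
    "https://splunk.example.com/en-US/app/search/dashboard_two",
    "https://splunk.example.com/en-US/app/search/dashboard_three"
  ];
  var i = 0;
  function next() { document.getElementById("dash").src = urls[i++ % urls.length]; }
  next();                      // show the first dashboard immediately
  setInterval(next, 10000);    // then rotate every ~10 seconds
</script>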
I have created a dashboard that reads an MQ flow containing messages to different vendors. I have created panels for the different vendors and am trying to group the messages for each of those vendors. Each vendor receives two message types, ASM and SSM. Two panels work, but the third does not: it only returns NULL, even though there are messages. The search is exactly the same for all three, with the exception of the vendor address. Here is the search:

index="emh_prd" ACXForm="TTYIN:MULEOUT:TTYOUT" XXXXXX AND .YYYYYY
| timechart count by DR1

XXXXXX is the vendor address and .YYYYYY is the sender address. The sender address stays the same, but each panel has a different XXXXXX value. I cannot figure out why only that one panel does not work and returns NULL, when it receives basically the same messages, just with a different XXXXXX value. I hope someone here can help me.
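
A NULL series from timechart ... by usually means the events matched the search but the split-by field (DR1 here) was not extracted for them. A diagnostic sketch to confirm, keeping XXXXXX and .YYYYYY as your placeholders:

index="emh_prd" ACXForm="TTYIN:MULEOUT:TTYOUT" XXXXXX AND .YYYYYY
| fillnull value="MISSING" DR1
| stats count by DR1

If the failing vendor's events all land under MISSING, compare one of its raw events with a raw event from a working vendor to see why the DR1 extraction does not fire for that message format.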