All Topics


Hello, I have a problem with the Analyst Queue: I am not able to add a column to the Analyst Queue in the GUI. When I do this (using the cogwheel icon on the Analyst Queue dashboard), the column is added, but when I log off and log on again, the previously added column disappears and the Analyst Queue is back to its default settings. I expected the new Analyst Queue configuration to be saved in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf, but I found that this file remains untouched when I add a new column. Is there a way to add a new column to the AQ permanently? So far I know of only one way - manually editing the $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf file (this works), but this is not usable for analysts who would like to customize the AQ GUI and do not have the privilege to edit config files. The environment is a Search Head cluster (3 nodes) with Splunk Enterprise 9.3.0 and Enterprise Security 8.1.0. Any hint would be highly appreciated. Best regards Lukas Mecir
Hi Splunk Community, I am using the Splunk Add-on for ServiceNow v8.0.0 in a Splunk Cloud environment and have configured a custom input for the change_request table with:

timefield = sys_updated_on
filter_parameters = sysparm_query=ORDERBYDESCsys_updated_on&sysparm_limit=100

Despite this, logs in _internal consistently show the add-on attempting to use: change_request.change_request.sys_updated_on

Example from splunkd_access.log: 200 GET /change_request.change_request.sys_updated_on

This causes data not to be ingested, despite the 200 HTTP responses. The correct behavior should use change_request as the table and include sys_updated_on in the query string, not in the URL path. Request: please confirm whether this is a known issue and whether there is a workaround. Thank you.
Hello, I'm looking to secure the connection to our deployment server using HTTPS, following this doc: https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/Securingyourdeploymentserverandclients I'm wondering whether a client certificate is mandatory, or if it would be possible to only install a certificate on the DS server itself? I don't need mTLS; my goal is only to have an encrypted connection between the server and the clients. Thanks for your help, Lucas
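For reference, the deployment server talks to its clients over the management port, which is configured in server.conf under [sslConfig]. A minimal sketch, assuming you only want server-side TLS and no client certificates (paths and passwords are placeholders):

# server.conf on the deployment server
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/ds-server-cert.pem
sslPassword = <certificate key password>
# no client certificates required, i.e. no mTLS
requireClientCert = false

# server.conf on each deployment client, so it can verify the DS certificate
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true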
Hello (excuse my English, it's not my native language). I upgraded Splunk Enterprise from 9.2.1 to 9.3.4. Everything went fine; each server was upgraded without any problem (Linux and Windows). But when I click on Settings > Users and authentication > Roles (same for Users), the panel with the list of users is empty. I can see only the top menu of Splunk; the rest of the page is blank. This occurs when the language is set to French (fr-FR); in English (en-US) it's OK. Can somebody help me? Where can I find the language pack, to see if something's missing? Thank you
The Splunk App for Linux already provides a stanza for collecting all the .log files in the /var/log folder ([monitor:///var/log]). But what if I want to write specific regex/transformations for a specific .log file, given its path? For example, I want to apply transformations by writing specific stanzas in props.conf and transforms.conf for the files /var/log/abc/def.log and /var/log/abc/ghi.log. How do I make these share the sourcetype "alphabet_log" and then write the regex functions for it? I also have a question regarding the Splunk docs. The props.conf docs state: For settings that are specified in multiple categories of matching [<spec>] stanzas, [host::<host>] settings override [<sourcetype>] settings. Additionally, [source::<source>] settings override both [host::<host>] and [<sourcetype>] settings. What does "override" mean here? Does it override everything, or does it combine the stanzas and only override the duplicated settings?
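For reference, a minimal sketch of what that could look like, assuming the props/transforms live where parsing happens (indexer or heavy forwarder); the regex and replacement are placeholders:

# props.conf
[source::/var/log/abc/def.log]
sourcetype = alphabet_log

[source::/var/log/abc/ghi.log]
sourcetype = alphabet_log

[alphabet_log]
TRANSFORMS-alphabet = alphabet_rewrite

# transforms.conf
[alphabet_rewrite]
REGEX = <your regex>
DEST_KEY = _raw
FORMAT = <your replacement>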
Trying to filter out all perfmon data using ingest actions. When I try to view the samples, I get this error. I checked whether my forwarders have the same pass4SymmKey and they do. I am not sure what to do; I'm checking now to ensure the firewall isn't blocking communication, but I think that is unlikely. I can see the servers in forwarder management picking up the deployment apps from the indexer. Does anyone have any ideas?
We have recently updated our deployment server to version 9.4.1. Whenever the page loads, the default view shows the GUIDs of the clients without hostname and IP. Every time, you have to click the gear on the right side to select the extra fields. This is not persistent, and you sometimes have to do it again. How do we make it persistent?
There is a process I'm trying to track. It starts by generating a single event. Then, asynchronously, a second event is created. My problem is that the async process often fails. I would like to find all occurrences of the first event that do not have a corresponding second event. I know how to search for each event independently. They share a couple of common identifiers that can be extracted. I have tried a subsearch and a join but have not gotten any results. As a compressed and simplified example, here is my pseudo search:

index=idx1 ... (identifiers here)
| rex "EventId: (?<event_id>\d+)"
| join type=left event_id
    [ search index=idx1 ... (identifiers here)
    | rex "\"EventId\",\"value\":\"(?<event_id>\d+)" ]

Both events occur at about the same time, usually within a second. They share the EventId extracted field, which can be considered unique within the time period I'm searching. Limits are not an issue, as this process occurs about 100 times a day. So how can I list the EventIds from the main search that do not have a match in the second search? Thank you, experts!
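For what it's worth, a join-free sketch that searches both event types in one pass and keeps only the IDs that never got a second event; the base search terms and rex patterns below are taken from the post and are placeholders:

index=idx1 ((first event identifiers) OR (second event identifiers))
| rex "EventId: (?<first_id>\d+)"
| rex "\"EventId\",\"value\":\"(?<second_id>\d+)"
| eval event_id=coalesce(first_id, second_id)
| eval stage=if(isnotnull(first_id), "first", "second")
| stats values(stage) as stages by event_id
| where stages="first"

The final where keeps an event_id only when the sole stage seen for it is "first", i.e. the async second event never arrived.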
Hi everyone. I have a token called "schedule_dttm" that has two attributes: "earliest" and "latest". By default, "schedule_dttm.latest" is initialized with "now()", but it can hold data in three different formats: the "now" I just mentioned, a specific epoch timestamp and a relative timestamp such as "-1h@h". My goal is to convert all of them to epoch timestamp, so the second case is trivial for me. But how do I (1) check which format is the date in and (2) create a logic to convert it properly conditionally based on the format its at? Thanks in advance, Pedro
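One possible eval-time sketch; the token name comes from the post, and the regex used to decide "looks like an epoch" is an assumption you may want to tighten:

| eval raw_latest="$schedule_dttm.latest$"
| eval latest_epoch=case(
      raw_latest=="now", now(),
      match(raw_latest, "^\d+(\.\d+)?$"), tonumber(raw_latest),
      true(), relative_time(now(), raw_latest))

relative_time() understands relative time modifiers such as "-1h@h", so the last branch covers that case.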
I have created a Studio dashboard in Splunk Cloud. I have created multiple panels in each tab, for example 10 panels per tab in a single Studio dashboard. Is there any way to configure it to auto-rotate through each and every tab every 20 seconds?
We're trying to suppress the warnings for reports that use the dbxlookup command to enrich data in the report. We have a pretty simple setup with one search head and one indexer. We created a commands.conf file under the $SPLUNK_HOME/etc/system/local/ folder with the following contents. There are no commands.conf files anywhere else on the system except under the default folders. After restarting, nothing changed.

# Disable dbxlookup security warnings in reports
[dbxlookup]
is_risky = false

Thinking that perhaps this needed to be added under our app's local folder, we moved the file there and restarted. Once done, we encountered Java and Python errors running any reports with dbxlookups. What are we missing? Thanks!
index=prd-Thailand sourcetype=abc-app-log earliest=-75m@m latest=now
| table a, b, c, d, e, f
| where a=1324 AND b=345
| stats count as volume

My question is how to replace earliest=-1440m@m. Please let me know if any more details are required.
How can I automate the process of exporting a Splunk report and uploading it to a OneDrive link? Does anyone have experience or suggestions on how to achieve this?
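In case a sketch helps frame it: one common pattern is to pull the report's results out over the Splunk REST API and then push the file to OneDrive with Microsoft Graph. Both calls below are generic sketches; the report name, credentials, and the Graph path are placeholders and assume you already have an OAuth access token for Graph:

# 1. Export the saved report's results as CSV via the Splunk REST API
curl -k -u svc_user:password \
  "https://<search-head>:8089/services/search/jobs/export" \
  --data-urlencode search='| savedsearch "MyReport"' \
  -d output_mode=csv -o report.csv

# 2. Upload the CSV to OneDrive via Microsoft Graph
curl -X PUT \
  "https://graph.microsoft.com/v1.0/me/drive/root:/reports/report.csv:/content" \
  -H "Authorization: Bearer <graph-access-token>" \
  -H "Content-Type: text/csv" \
  --data-binary @report.csv

Scheduling the script (cron, a scheduled task, or an automation platform) would cover the "automate" part.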
Hi everyone, I'm new to Splunk Cloud and trying to implement post-deployment test runs in our CI/CD pipelines. We have many tests in Synthetics and want to run them after deployments, so that we can confirm everything went well. My problem is that when I make an API call to /tests/api/try_now from Postman with a JSON body (the test), it works perfectly, but when I make the same call with cURL it hangs. I used this documentation: https://dev.splunk.com/observability/reference/api/synthetics_api_tests/latest#endpoint-createtrynowapitest I tried many versions of the test JSON; sometimes it works with only one resource in it, sometimes it works without validation. My request test JSON is created automatically from an existing test, so I don't want to change it. What could be the problem that it works with Postman but not cURL? Any help is appreciated. Regards, ilker
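If it helps to compare against a bare-bones invocation: the usual Postman-vs-cURL differences are a missing Content-Type header or the shell mangling the inline JSON, either of which can leave the server waiting for a body that never fully arrives. A sketch only, with the host left as whatever base URL Postman is using and the auth header assumed to be the usual Splunk Observability org-token header; both are assumptions to adjust:

curl -sS -X POST "https://<synthetics-api-host>/tests/api/try_now" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: <org-access-token>" \
  --data @test.json

Passing the body from a file with --data @test.json sidesteps shell quoting entirely, and adding -v will show whether the request is actually being sent.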
Hi, firstly, thank you for the work on this add-on, and thanks to the community that helps each other solve problems. We have Splunk Cloud and want to connect it with Jira using this add-on. The idea is to send to Jira all the tickets that Splunk creates and manage them in Jira. When a ticket is closed in Jira, we want all the information, comments, and updates on the ticket to flow back so we can visualize them in Splunk. Any ideas or URLs that would help us configure this? Maybe with a webhook? Thank you so much, kindest regards. P.S.: Sorry about my English, it is not the best.
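In case it points in a useful direction: for the Jira-to-Splunk half, one common approach is to have a Jira webhook or Jira Automation rule POST issue updates to a Splunk HTTP Event Collector endpoint, then correlate those events with the original ticket in Splunk. A rough sketch of the call shape only; the stack name, token, sourcetype, and fields are all placeholders:

curl "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"sourcetype": "jira:webhook", "event": {"issue_key": "ABC-123", "status": "Done", "comment": "closed after fix"}}'

The Splunk-to-Jira half (ticket creation) is typically handled by the add-on's alert action, so the webhook would only need to carry the updates coming back.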
I have a need to share high-level metrics (via tstats) from a couple of indexes that a few of my teammates do not have access to. I have a scheduled report, let's call it ScheduledReportA, that is running that tstats command once a day in the morning. I was planning to use the loadjob command to load the results of that report into a dashboard that my teammates can then filter and search to get the information they need, but I've noticed that the loadjob command only works some of the time for me, and otherwise returns 0 results. I know it is not my search syntax, as I have used the same search and sometimes gotten results, sometimes not. Syntax for reference:

| loadjob savedsearch="kaeleyt:my_app_name:ScheduledReportA"

Some additional information to help rule things out:
- The loadjob search is being run in the same app that ScheduledReportA lives in
- The report always has thousands of results, and yes, I've checked this
- ScheduledReportA is shared with the app and its users
- dispatch.ttl is set to 2p (which I have always understood to be twice the schedule, which in this case is 24h, so a 48h TTL)

I don't suspect it to be a permissions issue or a job-expiration issue based on the above, but I'm wondering if I'm missing something or if anyone has run into similar issues.
My end goal: I would like to leverage our Windows Splunk deployment server/Splunk Enterprise server to receive logs from universal forwarders, alert on events from that Splunk instance, and then forward the logs to Splunk Cloud.

Our current architecture includes Splunk Cloud, which receives events from an Ubuntu forwarder, which in turn receives logs from syslog and from other universal forwarders installed on Windows machines across the network.

Deployment server: I believe this also forwards logs to Splunk Cloud. There were some apps that required installation on a Splunk Enterprise instance, and we are receiving that data in Cloud with the deployment server's name in the host field, so I think some of those events are forwarded from the deployment server. I don't think those flow through the Ubuntu server.

I am not exactly sure where to start on figuring this out. I have leveraged Splunk documentation for building source inputs and really thrived off of that, but I have been hammering at this, making changes to outputs.conf, with no success. It does not appear that any events are being indexed on the Splunk Enterprise/deployment server instance. Thank you for your help in advance.
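If it's useful as a starting point: keeping a local indexed copy while still forwarding everything to Splunk Cloud is usually an outputs.conf question on the enterprise instance. A sketch only; the group name is made up, and the Splunk Cloud server and certificate settings normally come from the Splunk Cloud forwarder credentials package rather than being typed by hand:

# outputs.conf on the enterprise/deployment server instance
[indexAndForward]
# keep a local copy so searches and alerts can run on this instance
index = true

[tcpout]
defaultGroup = splunkcloud_group

[tcpout:splunkcloud_group]
# normally supplied by the Splunk Cloud universal forwarder credentials app
server = inputs.<your-stack>.splunkcloud.com:9997

With index = true the instance indexes what it receives and forwards it as well; without it, enabling a [tcpout] default group makes the instance forward-only, which would explain seeing nothing indexed locally.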
I feed data to Splunk using the HTTP Event Collector. A sample event:

{ "event": { "event_id": "58512040", "event_name": "Access Granted", ... "event_local_time_with_offset": "2025-07-09T14:46:28+00:00" }, "sourcetype": "BBL_splunk_pacs" }

I set up the source type BBL_splunk_pacs (see screenshot below). When I search for the events, I see 2 issues: (1) _time is not parsed correctly from event_local_time_with_offset. (2) Most of the time, seemingly at random, all event fields are duplicated, and sometimes they are not. Any idea what I may be doing wrong? Thank you.
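For the _time issue, a props.conf sketch for the sourcetype on the indexing tier; the attribute name comes from the sample event, and MAX_TIMESTAMP_LOOKAHEAD is a guess you may need to widen:

[BBL_splunk_pacs]
TIME_PREFIX = "event_local_time_with_offset"\s*:\s*"
# %:z expects an offset written like +00:00
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json

Alternatively, sending the epoch timestamp in the top-level "time" field of the HEC payload avoids timestamp parsing entirely. On the duplicated fields: if the sourcetype also has INDEXED_EXTRACTIONS = json, every field can exist once as an indexed extraction and once more from search-time JSON extraction, which is a classic cause of intermittent duplicates; this is a hypothesis to check against your sourcetype settings, not a diagnosis.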
I am trying to extract multiple metrics at once using a SignalFlow query, but I am not sure if this is supported or just undocumented. One metric works fine:

| sim flow query=" data('k8s.hpa.current_replicas', filter............"

Wildcard matching of metrics works fine too:

| sim flow query=" data('k8s.hpa*', filter............"

But I have not been able to extract multiple named metrics (not wildcarded). Something like this (not working!):

| sim flow query=" data('k8s.hpa.current_replicas k8s.hpa.max_replicas', filter............"

Any ideas on how to get this to work?
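One thing that may be worth trying: a SignalFlow program can contain more than one statement, so instead of putting two metric names inside a single data() call, each metric could be its own stream with its own publish(). A sketch only, with the filters elided just as in the post, and assuming the sim command passes the whole program through unchanged:

| sim flow query="
A = data('k8s.hpa.current_replicas', filter=...).publish(label='current_replicas')
B = data('k8s.hpa.max_replicas', filter=...).publish(label='max_replicas')
"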
Hi, I'm trying to transfer this app from Splunk Enterprise to Splunk Cloud. Is there a way to do that?