
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

We have recently updated our deployment server to version 9.4.1. Whenever the page loads, the default client view shows only the GUID of each client, without hostname and IP. Every time, you have to click the gear on the right side to select the extra fields. The selection is not persistent, and you sometimes have to do it again. How do we make it persistent?
There is a process I'm trying to track. It starts by generating a single event. Then, asynchronously, a second event is created. My problem is that the async process often fails. I would like to find all occurrences of the first event that do not have a corresponding second event. I know how to search for each event independently. They share a couple of common identifiers that can be extracted. I have tried a subsearch and a join but have not gotten any results. As a compressed and simplified example, here is my pseudo search:

index=idx1 ... (identifiers here)
| rex "EventId: (?<event_id>\d+)"
| join type=left event_id
    [ search index=idx1 ... (identifiers here)
      | rex "\"EventId\",\"value\":\"(?<event_id>\d+)" ]

Both events occur at about the same time, usually within a second. They share the EventId extracted field, which can be considered unique within the time period I'm searching. Limits are not an issue, as this process occurs about 100 times a day. So how can I list out the EventIds from the main search that do not have a match in the second search? Thank you, experts!
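One pattern that avoids join entirely is to run both event searches together, tag each event, and keep the IDs that only ever appear as the first event. A minimal sketch, assuming both events live in index=idx1 and reusing the two rex patterns from the post (the identifier filters are placeholders):

(index=idx1 ... first-event identifiers) OR (index=idx1 ... second-event identifiers)
| rex "EventId: (?<event_id_a>\d+)"
| rex "\"EventId\",\"value\":\"(?<event_id_b>\d+)"
| eval event_id=coalesce(event_id_a, event_id_b)
| eval event_type=if(isnotnull(event_id_b), "second", "first")
| stats values(event_type) as types by event_id
| where mvcount(types)=1 AND types="first"

The final where clause keeps only EventIds that were never seen with the second-event pattern.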
Hi everyone. I have a token called "schedule_dttm" that has two attributes: "earliest" and "latest". By default, "schedule_dttm.latest" is initialized with "now()", but it can hold data in three different formats: the "now" I just mentioned, a specific epoch timestamp, or a relative timestamp such as "-1h@h". My goal is to convert all of them to an epoch timestamp, so the second case is trivial for me. But how do I (1) check which format the value is in and (2) build logic that converts it conditionally based on that format? Thanks in advance, Pedro
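One way is a single eval case() that tests each shape of the token value. A minimal sketch, assuming the token resolves to a plain string (the token name comes from the post; the regex check and field names are illustrative):

| eval latest_raw="$schedule_dttm.latest$"
| eval latest_epoch=case(
    latest_raw="now" OR latest_raw="now()", now(),
    match(latest_raw, "^\d+(\.\d+)?$"), tonumber(latest_raw),
    true(), relative_time(now(), latest_raw))

The case() falls through to relative_time() for anything that is neither "now" nor purely numeric, which covers snap-to expressions like "-1h@h".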
I have created a Studio Dashboard in Splunk Cloud. I have created multiple panels in each tab, for example 10 panels per tab, in a single Studio Dashboard. Is there any way we can configure it to auto-rotate through each and every tab every 20 seconds?
We're trying to suppress the warnings for reports that use the dbxlookup command to enrich data in the report. We have a pretty simple setup with one search head and one indexer. We created a commands.conf file under the $SPLUNK_HOME/etc/system/local/ folder with the following contents. There are no commands.conf files anywhere else on the system except under the defaults folders. After restarting, nothing changed.

# Disable dbxlookup security warnings in reports
[dbxlookup]
is_risky = false

Thinking that perhaps this needed to be added under our app's local folder, we moved the file there and restarted. Once done, we encountered Java and Python errors when running any reports with dbxlookup. What are we missing? Thanks!
index=prd-Thailand sourcetype=abc-app-log earliest=-75m@m latest=now
| table a, b, c, d, e, f
| where a=1324 AND b=345
| stats count as volume

The question is how to replace earliest=-1440m@m. Please let me know if any more details are required.
How can I automate the process of exporting a Splunk report and uploading it to a OneDrive link? Does anyone have experience or suggestions on how to achieve this?
Hi everyone, I'm new to Splunk Cloud and trying to implement test runs for post-deployment checks in our CI/CD pipelines. We have many tests in Synthetics and want to run them after deployments so that we can confirm everything went well. My problem is that when I make an API call to /tests/api/try_now from Postman with a JSON body (the test), it works perfectly, but when I make the same call with cURL it hangs. I used this documentation: https://dev.splunk.com/observability/reference/api/synthetics_api_tests/latest#endpoint-createtrynowapitest. I tried many versions of the test JSON; sometimes it works with only one resource in it, sometimes it works without validation. My request test JSON is generated automatically from an existing test, so I don't want to change it. What could cause it to work with Postman but not cURL? Any help is appreciated. Regards, ilker
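For comparison, here is a minimal cURL sketch of the kind of call described; the host, auth header, and file name are placeholders and not taken from the post. Differences between Postman and cURL are often down to a missing Content-Type header or passing the body without the @file syntax:

curl -sS -X POST "https://<your-synthetics-host>/tests/api/try_now" \
  -H "Content-Type: application/json" \
  -H "<your-auth-header>: <your-token>" \
  --data @test.json

Adding -v (verbose) to the cURL call also shows whether the request body and headers actually match what Postman sends.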
Hi, Firstly, thank you for the work on this add-on, and thanks to the community for helping each other solve problems. We have Splunk Cloud and want to connect it with Jira using this add-on. The idea is to send to Jira all the tickets that Splunk creates and manage them in Jira. When a ticket is closed in Jira, we want to update all the information, comments, and updates on the ticket so we can visualize them in Splunk. Any ideas or URLs that would help us configure this? Maybe with a webhook? Thank you so much, kindest regards. P.S.: Sorry about my English, it is not the best.
I have a need to share high level metrics (via tstats) from a couple of indexes that a few of my teammates do not have access to. I have a scheduled report, let's call it ScheduledReportA, that is running that tstats command once a day in the morning. I was planning to use the loadjob command to load the results of that report into a dashboard that my teammates can then filter on and search to get the information they need, but I've noticed that the loadjob command only works some of the time for me, and otherwise will return 0 results. I know it is not my search syntax, as I have used the same search and sometimes gotten results, sometimes not. Syntax for reference:

| loadjob savedsearch="kaeleyt:my_app_name:ScheduledReportA"

Some additional information to help rule things out:
- The loadjob command search is being run in the same app that ScheduledReportA lives in
- The report always has thousands of results, and yes, I've checked this
- ScheduledReportA is shared with the app and its users
- dispatch.ttl is set to 2p (which I have always understood to be twice the schedule, which in this case is 24h, so 48h ttl)

I don't suspect it to be a permissions issue or a job expiration issue based on the above, but I'm wondering if I'm missing something or if anyone has run into similar issues.
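To help rule out artifact expiry at the moment loadjob returns nothing, one check worth trying (a sketch only; property names can vary by version) is to list the scheduler's job artifacts and their remaining TTL via the REST endpoint:

| rest /services/search/jobs splunk_server=local
| search label="ScheduledReportA"
| table label, sid, ttl, dispatchState, isSavedSearch

If no row comes back, or ttl is near zero, the artifact has expired or been cleaned up, and loadjob would return 0 results for that run.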
My end goal: I would like to leverage our Windows Splunk deployment server/Splunk Enterprise server to receive logs from universal forwarders, alert off events on that Splunk instance, and then forward the logs to Splunk Cloud. Our current architecture includes Splunk Cloud, which receives events from an Ubuntu forwarder; that forwarder receives logs from syslog and from other universal forwarders installed on Windows machines across the network. Deployment server: I believe this also forwards logs to Splunk Cloud. There were some apps that required installation on a Splunk Enterprise instance, and we are receiving that data in Cloud with the deployment server name in the host field, so I think some of those events are forwarded from the deployment server; I don't think those flow through the Ubuntu server. I am not exactly sure where to start on trying to figure this out. I have leveraged Splunk documentation for building source inputs and really thrived off of that, but I have been hammering at this, making changes to outputs.conf, with no success. It does not appear that any events are being indexed on the Splunk Enterprise/deployment server instance. Thank you for your help in advance.
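For reference, a minimal outputs.conf sketch of the usual pattern for a full Splunk Enterprise instance that should index (and therefore search and alert on) data locally while still forwarding it on; the group name and server endpoint are placeholders, and on a real Splunk Cloud setup the forwarding stanza normally comes from the Cloud-provided forwarder credentials app rather than being hand-written:

# outputs.conf on the Splunk Enterprise instance (placeholder values)
[tcpout]
defaultGroup = splunkcloud
# keep a local indexed copy so searches and alerts can run on this instance
indexAndForward = true

[tcpout:splunkcloud]
server = <your-splunk-cloud-forwarding-endpoint>:9997

Note that indexAndForward only applies to full Splunk Enterprise instances, not universal forwarders, which matches the setup described above.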
I feed data to Splunk using the HTTP Event Collector. Sample event:

{
  "event": {
    "event_id": "58512040",
    "event_name": "Access Granted",
    ...
    "event_local_time_with_offset": "2025-07-09T14:46:28+00:00",
  },
  "sourcetype": "BBL_splunk_pacs"
}

I set up the source type BBL_splunk_pacs (see screenshot below). When I search for the events, I see two issues:
1. _time is not parsed correctly from event_local_time_with_offset.
2. Most of the time, seemingly at random, we get all event fields duplicated, and sometimes they are not duplicated.

Any idea what I may be doing wrong? Thank you.
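Two common causes worth checking, sketched as props.conf settings for the sourcetype above (illustrative, not a confirmed fix). Duplicated fields often come from extracting the JSON twice, once at index time (INDEXED_EXTRACTIONS=json) and again at search time (KV_MODE=json / automatic JSON KV). For _time, note that the HEC /event endpoint normally takes the timestamp from the envelope's "time" field or the receipt time; TIME_PREFIX/TIME_FORMAT generally only take effect when data arrives via the /raw endpoint.

[BBL_splunk_pacs]
# if JSON is extracted at index time, turn off search-time JSON extraction
# so fields are not created twice
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
# only relevant when events arrive via the /raw endpoint:
TIME_PREFIX = "event_local_time_with_offset"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 40

If you must keep the /event endpoint, the simplest route for the timestamp is to have the sender populate the top-level "time" field of the HEC payload with the epoch value.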
I am trying to extract multiple metrics at once using a SignalFlow query, but I am not sure whether this is supported or just undocumented. One metric works fine:

| sim flow query="data('k8s.hpa.current_replicas', filter............"

Wildcard matching of metrics works fine too:

| sim flow query="data('k8s.hpa*', filter............"

But I have not been able to extract multiple named metrics (not wildcarded). Something like this (not working!!!):

| sim flow query="data('k8s.hpa.current_replicas k8s.hpa.max_replicas', filter............"

Any ideas on how to get this to work?
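Since data() takes a single metric name (or wildcard) rather than a space-separated list, one approach worth trying (a sketch, not verified against your environment; the filters are elided just as in the post) is to declare each metric as its own stream and publish both from one SignalFlow program:

| sim flow query="
A = data('k8s.hpa.current_replicas', filter=...)
B = data('k8s.hpa.max_replicas', filter=...)
A.publish('current_replicas')
B.publish('max_replicas')
"

Each publish() label then comes back as a separate series that can be told apart in the sim results.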
Hi, I'm trying to transfer this app from Splunk Enterprise to Splunk Cloud. Is there a way to do that?
Hello guys, here is the current situation. Below is what I'd like to achieve. I've tried the following, with no success. Can anyone help me achieve my goal? Thanks in advance.
I have an event that looks as follows:

{
  "app_name": "my_app",
  "audit_details": {
    "audit": {
      "responseContentLength": "-1",
      "name": "app_name",
      "details": {
        "detail": [{
            "messageId": "-4",
            "time": "1752065281146",
            "ordinal": "0"
          }, {
            "messageId": "7103",
            "time": "1752065281146",
            "ordinal": "1"
          }, {
            "messageId": "7101",
            "time": "1752065281146",
            "ordinal": "2"
          }
        ]
      }
    }
  }
}

I want to create a table that includes a row for each detail record with the messageId, time, and ordinal, but also a messageIdDescription that is retrieved from a lookup, similar to the following:

lookup Table_MessageId message_Id as messageId OUTPUT definition as messageIdDescription

The Table_MessageId lookup has three columns: message_Id, definition, and audit_Level. Any pointers are appreciated.
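One way to get a row per detail entry is to expand the JSON array and then run the lookup on each expanded row. A minimal sketch, assuming the event is searchable as JSON and using the lookup definition named above (the base search is a placeholder):

<your base search>
| spath path=audit_details.audit.details.detail{} output=detail
| mvexpand detail
| spath input=detail
| lookup Table_MessageId message_Id as messageId OUTPUT definition as messageIdDescription
| table messageId, messageIdDescription, time, ordinal

mvexpand turns the multivalue detail field into one row per array element, and the second spath re-extracts messageId, time, and ordinal from each element before the lookup runs.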
Hi, I tried to use the Next Steps of the correlation search: Ping - NSLOOKUP - Risk Analysis. I was able to find the result of the Risk Analysis in the Risk Analysis dashboard, but when I try to use ping/nslookup, I get no output. How can I find the result of the ping command?
Hi, I have a variety of CSV lookup tables and have to add a field to each of these tables. The CSV files are used by scheduled searches, so I need their contents AND the field names.

table1.csv:
index,sourcetype
index1,st1

table2.csv:
sourcetype,source
st1,source1

table3.csv:
field1,field2
(no rows)

For this, I use the following SPL:

| inputlookup table1.csv | table index,sourcetype,comment1 | outputlookup table1.csv
| inputlookup table2.csv | table sourcetype,source,comment2 | outputlookup table2.csv
| inputlookup table3.csv | table field1,field2,comment3 | outputlookup table3.csv

For table1 and table2, this works. But for table3, the problem is that outputlookup creates an empty table and the field names are missing. Is there a search that can extend both empty and filled lookups? Thank you.
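When the pipeline produces zero results, table and outputlookup have no fields to write, so the header is lost. One workaround sketch (assuming a temporary placeholder row is acceptable; the marker value is made up for illustration) appends a dummy row only when the lookup is empty, so the fields always survive to outputlookup:

| inputlookup table3.csv
| appendpipe
    [ stats count
      | where count=0
      | eval field1="__placeholder__", field2="__placeholder__", comment3="__placeholder__"
      | fields - count ]
| table field1, field2, comment3
| outputlookup table3.csv

The placeholder row keeps the header in the CSV; the scheduled searches (or the next search that populates the lookup) can drop rows where field1="__placeholder__".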
Hi! Is it possible to restore deleted Mobile Apps in User Experience Monitoring of AppDynamics?
We are storing data in a Splunk lookup file on one of the forwarders.  In our distributed Splunk architecture, this lookup data is not getting forwarded to the indexers or the search head, and therefore it is not available for search or enrichment.  How can we sync or transfer this lookup data from the forwarder to the search head (or indexers) so that it can be used across the distributed environment?
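Lookup files are not forwarded as part of normal event forwarding, so the data has to reach the search head another way. One common pattern, sketched under assumptions (the path, index, sourcetype, field, and lookup names are all placeholders), is to ingest the CSV from the forwarder as events and rebuild the lookup on the search head with a scheduled search:

# inputs.conf on the forwarder
[monitor:///opt/splunk_lookups/mylookup.csv]
index = lookup_sync
sourcetype = mylookup_csv

# props.conf for that sourcetype, so the CSV header becomes fields
[mylookup_csv]
INDEXED_EXTRACTIONS = csv

Scheduled search on the search head, rebuilding the lookup where searches can use it:

index=lookup_sync sourcetype=mylookup_csv
| dedup key_field
| table key_field, value_field
| outputlookup mylookup.csv

Alternatively, if the forwarder is a full Splunk Enterprise instance, the CSV can simply be copied or deployed to the search head's lookups directory by whatever configuration management you already use.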