
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I have a distributed Splunk environment, and my dashboard is linked to Services. I made a few changes to a KPI base search and the KPI title, but they are not reflected in the dashboard. Please suggest what needs to be done.
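As a first check, it can help to confirm whether the edited KPI is actually writing fresh results under its new title; a sketch against the ITSI summary index (the KPI title is a placeholder for your own):

index=itsi_summary kpi="<your KPI title>"
| stats latest(_time) as last_result by serviceid, kpiid, kpi

If new results appear here but not in the dashboard, the dashboard panel is likely still referencing the old KPI title or id.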
I noticed that dashboards in Splunk 9.1.0 open in a new tab instead of the same tab. This wasn't the case in previous versions of Splunk. Does anyone know why this change was added and how to make dashboards open in the same tab using conf file changes? Any help is much appreciated. Thanks.
Is there any way to disable the Dashboard Studio and classic dashboard help cards under the Dashboards tab through conf file changes?
Please help comment on the issue below.

Bug description: the limit option is not processed correctly for phantom.collect2 in Phantom version 6.1.0.

Reproduced in lab:

testb = phantom.collect2(container=container, tags=["test"], datapath=['artifact:*.name'], limit=0)
phantom.debug(len(testb))

There are more than 6000 artifacts in the test container. However, phantom.collect2 returns only 1999 results, even though we set limit=0, which means no limit.

Nov 09, 11:19:01 : phantom.collect2(): called for datapath['artifact:*.name'], scope: None and filter_artifacts: None
Nov 09, 11:19:01 : phantom.get_artifacts() called for label: *
Nov 09, 11:19:01 : phantom.collect(): called with datapath: artifact:* / <class 'str'>, limit = 2000, scope=all, filter_artifact_ids=[] and none_if_first=False with trace:False
Nov 09, 11:19:01 : phantom.collect(): calling out to collect_from_container
Nov 09, 11:19:01 : phantom.collect(): called with datapath 'artifact:*', scope='all' and limit=2000. Found 2000 TOTAL artifacts
Nov 09, 11:19:01 : phantom.collect2(): Classified datapaths as [<DatapathClassification.ARTIFACT: 1>]
Nov 09, 11:19:01 : phantom.collect(): called with datapath as LIST of paths, scope='all' and limit=0. Found 1999 TOTAL artifacts
Nov 09, 11:19:01 : 1999
After upgrading a distributed Splunk Enterprise environment from 9.0.5 to 9.1.1, a lot of issues were observed. The most pressing one was the unexpected wiping of all inputs.conf and outputs.conf files from heavy forwarders. All configuration files are still present and intact on the deployment server, but after unpacking the updated version and bringing Splunk back up on the heavy forwarders, all input/output files were wiped from all apps and are not being fetched from the deployment server. So none of them were listening for incoming traffic or forwarding to indexers.

Based on previous experience, there is no way to "force push" configuration from the deployment server when all instances are "happy", which means manual inspection and repair of all affected apps. So now I am curious as to why this happened. If there was something wrong with the configuration, I'd expect some errors to be thrown, not entire files deleted. Any input on why this happened, and how to find out, would be appreciated.

UPDATE: By now it is very clear what happened: a bunch of default folders were simply deleted during the update. There are a few indications of this in different log files.

11-08-2023 12:21:19.816 +0100 INFO AuditLogger - Audit:[timestamp=11-08-2023 12:21:19.816, user=n/a, action=delete-parent,path="/opt/splunk/etc/apps/<appname>/default/inputs.conf"

This was unfortunate, as the deploymentclient.conf file was stored in <appname>/default and got erased together with almost all inputs/outputs.conf files and a bunch of other things stored in the default folder. I don't get the impression that this is expected behaviour, so now I am curious about the cause of this highly strange outcome.
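For scoping the damage, the deletions can be pulled from the audit trail; a minimal sketch, assuming the heavy forwarders' _audit index is searchable and reusing the action from the log line above:

index=_audit action=delete-parent path="*/default/*"
| table _time, user, action, path
| sort 0 _time

sort 0 keeps all rows rather than the default first 10,000, which matters if many files were removed.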
I am trying to write a rex command that extracts the field "registrar" from the four event examples below; the company names after "Registrar:" are what I want as the value for "registrar". I am using the following regex to extract the field and values, but I seem to be capturing the \r\n after the company name as well. How can I modify my regex to capture just the company name leading up to \r\n Registrar IANA?

Current regex being used:

Registrar:\s(?<registrar>.*?) Registrar IANA

Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: ABC Holdings, Inc.\r\n Registrar IANA ID: 972
Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: Gamer.com, LLC\r\n Registrar IANA ID: 837
Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: NoCo MFR Ltd.\r\n Registrar IANA ID: 756
Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: Onetrust Group, INC\r\n Registrar IANA ID: 478
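Without the raw events in hand, a sketch of one adjustment: instead of a lazy dot, exclude the characters that begin the line break, so the capture stops right before \r\n whether it appears as literal backslash-r-backslash-n text or as real control characters:

| rex "Registrar:\s(?<registrar>[^\\\\\r\n]+)"

The doubled backslashes are SPL string escaping for one literal backslash in the regex; if the \r\n in your events are real control characters only, [^\r\n]+ is enough.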
Hi Folks,

I'm looking for a document that will help me understand my options for ensuring the integrity of data inbound to Splunk from monitored devices, and any security options I may have there. I know TLS is an option for inter-Splunk traffic. Unfortunately, I'm not having any luck finding options to ensure the integrity and security of data when it's first received into Splunk. Surely there's a way for me to secure that; what am I missing here?
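For the first hop into Splunk (forwarder or device to receiver), TLS on the receiving port is the standard control, and requiring client certificates adds sender authentication on top of transport integrity. A minimal inputs.conf sketch for the receiving instance, with placeholder certificate path and password:

# inputs.conf on the receiving instance (path and password are placeholders)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <certificate password>
requireClientCert = true

For devices that cannot speak the Splunk-to-Splunk protocol, the HTTP Event Collector over HTTPS covers the same transport-integrity ground.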
I am trying to write a regex to extract a field called "registrar" from data like I have below. Can you please help with how I could write this regex for use in a rex command? Below are three example events:

Registry Date: 2025-10-08T15:18:58Z   Registrar: ABC Holdings, Inc.   Registrar ID: 291  Server Name: AD12
Registry Date: 2025-11-08T15:11:58Z   Registrar: OneTeam, Inc.   Registrar ID: 235  Server Name: AD17
Registry Date: 2025-12-08T15:10:58Z   Registrar: appit.com, LLC   Registrar ID: 257  Server Name: AD14

I need the regex to extract the field called "registrar", which in the above examples would have the following three value matches:

ABC Holdings, Inc.
OneTeam, Inc.
appit.com, LLC
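A sketch against the three sample events, using a lazy capture that stops at the whitespace before the "Registrar ID:" label:

| rex "Registrar:\s+(?<registrar>.+?)\s+Registrar ID:"

The lazy .+? keeps multi-word names like "ABC Holdings, Inc." intact while the trailing \s+ absorbs the run of spaces before the next label.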
Hello, I am currently using the append command to combine two queries and tabulate the results, but I see only 4999 transactions. Is there any way I can get full results? Thanks in advance!
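Without seeing the two queries it is hard to be specific, but append runs its second search as a subsearch, and subsearches are capped by limits.conf settings; a count that stops just short of a round number, like 4999, usually points at such a cap. Where both datasets live in searchable indexes, a sketch of one approach that avoids subsearches entirely, with placeholder names:

(index=idx_a sourcetype=typeA) OR (index=idx_b sourcetype=typeB)
| stats count by sourcetype, common_field

Here idx_a, idx_b, and common_field are stand-ins for your own indexes and the field you tabulate on.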
Does anyone know a pattern for detecting half-duplex connections from server/laptop sources to server destinations? Not switches, not routers. I am on Splunk Cloud version 9.0.2305.101.
Hi,

My main goal is to find the user id.

Index=A sourcetype=signlogs outcome=failure

The above search has a field called processId, but it doesn't have the userId that I need.

Index=A sourcetype=accesslogs

This search has a SignatureProcessId (the same as processId in the first search) and also has userId. So I need to join these two queries on the common field processId/SignatureProcessId. I tried the query below, but it returns 0 events:

Index=A sourcetype=signlogs outcome=failure | dedup processId | rename processId as SignatureProcessId | join type=inner SignatureProcessId [Index=A sourcetype=accesslogs | dedup SignatureProcessId] | Table _time, SignatureProcessId, userId

Can someone please help fix this query?
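join is often the wrong tool in SPL (subsearches are capped in rows and runtime, which can silently drop matches); a stats-based sketch that avoids it entirely, assuming both sourcetypes live in index A as described:

index=A ((sourcetype=signlogs outcome=failure) OR sourcetype=accesslogs)
| eval SignatureProcessId=coalesce(SignatureProcessId, processId)
| stats min(_time) as _time, values(userId) as userId, count(eval(sourcetype=="signlogs")) as failed_signins by SignatureProcessId
| where failed_signins > 0
| table _time, SignatureProcessId, userId

coalesce() puts both id fields under one name, stats pairs the userId with each process id, and the where clause keeps only process ids that had a sign-in failure.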
Hello! I have just created a trial account to try the OpenTelemetry integration. When I go to the OTel tab to generate a key and press the button, nothing happens: the access key does not appear, but the button becomes active again. So is the trial account not enabled for OTel integration? Thanks!
Hi Team,

I have set an alert for the query below:

index="abc" "ebnc event did not balanced for filename" sourcetype=600000304_gg_abs_dev source!="/var/log/messages" | rex "-\s+(?<Exception>.*)" | table Exception source host sourcetype _time

I got the result below (screenshot) and have set the alert as below (screenshot). I have also set up an incident for it with the SAHARA forwarder, but I am getting only 1 incident even though the statistics showed 6; 6 incidents should be created. Also, incidents are arriving very late: if the event triggered at 8:20, the incident comes at 9:16. Can someone guide me on this?
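On the one-incident-versus-six part: an alert configured to trigger once per search produces a single alert action regardless of how many rows come back. In savedsearches.conf terms, per-result triggering is digest mode off; a sketch, with a hypothetical stanza name:

# savedsearches.conf (stanza name is a placeholder)
[ebnc_event_not_balanced_alert]
alert.digest_mode = 0

This is the conf-file equivalent of choosing "For each result" in the alert's trigger settings.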
I have an SH cluster with 3 servers, but I'm getting a lot of replication errors because the data models fill up the dispatch directory. How are jobs released from dispatch? Are files cleaned automatically? There are many "bad alloc" errors. Thanks.
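Dispatch artifacts are normally reaped by splunkd once their time-to-live expires, so one useful check is what is currently sitting in dispatch and how long each job will be kept; a sketch, run on each search head:

| rest /services/search/jobs splunk_server=local
| table sid, label, diskUsage, ttl, dispatchState
| sort - diskUsage

diskUsage and ttl here are job properties exposed by the REST endpoint; long-lived accelerated data model searches and orphaned jobs tend to float to the top of this list.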
I'm trying to troubleshoot some Windows Event Log events coming into Splunk. The events are stream processed and come in as JSON. Here is a sample (obfuscated):

{"Version":"0","Level":"0","Task":"12345","Opcode":"0","Keywords":"0x8020000000000000","Correlation_ActivityID":"{99999999-9999-9999-9999-999999999999}","Channel":"Security","Guid":"99999999-9999-9999-9999-999999999999","Name":"Microsoft-Windows-Security-Auditing","ProcessID":"123","ThreadID":"12345","RecordID":"999999","TargetUserSid":"AD\\user","TargetLogonId":"0xXXXXXXXXX"}

There are a number of indexed fields as well, including "Computer" and "EventID". What's interesting: signature_id seems to be created, but when I search on it, it fails. In this event, signature_id is shown under "Interesting Fields" with the value 4647, but if I put signature_id=4647 in the search line, it comes back with no results. If I put EventID=4647, it comes back with the result. I'm using Smart Mode. This led me to dig into the field configurations (aliases, calculations, etc.), but I couldn't figure out how signature_id is created in the Windows TA. Can anyone provide any insight? Thank you! Ed
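One way to look for the definition without shell access to btool is to read the props configuration over REST; a sketch (the title filter is a guess at the TA's stanza names):

| rest /servicesNS/-/-/configs/conf-props splunk_server=local
| search title="*WinEventLog*"
| fields title, FIELDALIAS-*, EVAL-*, LOOKUP-*

In several versions of the Windows TA, signature_id is typically produced at search time (via a lookup or eval over the event code) rather than being indexed, which would explain why it shows under Interesting Fields yet behaves differently from the indexed EventID in the search bar.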
Good day, ladies and gentlemen! This is my first Dashboard Studio experience, and one (1) space boggles me. I have a datasource that works:

"ds_teamList": {
  "type": "ds.search",
  "options": {
    "query": "host=\"splunk.cxm\" index=\"jira\" sourcetype=\"csv\" \"Project name\"=\"_9010 RD\" \n| rename \"Custom field __Team\" as TEAM\n| table TEAM\n| dedup TEAM \n| sort TEAM"
  },
  "name": "teamList"
}

A multiselect input that lists the correct data, with one (1) team name containing spaces:

"input_TEAM": {
  "options": {
    "items": [{}],
    "token": "ms_Team",
    "clearDefaultOnSelection": true,
    "selectFirstSearchResult": true
  },
  "title": "TEAM",
  "type": "input.multiselect",
  "dataSources": {
    "primary": "ds_teamList"
  }
}

A chain search that uses the ms_Team token:

| search TEAM IN ($ms_Team$)
| search CLIENT IN ($dd_Client$)
| search Priority IN ($ms_priority$)
| chart count over Status by TEAM

The result gets all good data, except for the team that has a space in its name. I know that if I could add double quotes around the team name with the space, it would work:

| search TEAM IN (Detector,Electronic,Mechanical,Software,"Thin film")

But I cannot find a solution for this seemingly minor issue. Either this is a bug, or this is not the way I'm supposed to use Dashboard Studio. I searched and tried many solutions about strings in tokens and searches, and now I'm here for the first time. Is any simple solution possible? Thank you! Sylvain
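One workaround that stays inside SPL: have the datasource emit the team names already wrapped in double quotes, so whatever the multiselect puts into $ms_Team$ already carries the quoting that IN (...) needs. A sketch of the ds_teamList query (shown unescaped; inside the dashboard JSON the quotes need the usual backslash escaping):

host="splunk.cxm" index="jira" sourcetype="csv" "Project name"="_9010 RD"
| rename "Custom field __Team" as TEAM
| dedup TEAM
| sort TEAM
| eval TEAM="\"" . TEAM . "\""
| table TEAM

The cosmetic cost is that the quotes also show up in the dropdown labels; whether that trade-off is acceptable is a judgment call.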
Hi All,

My requirement is that records in the source data need to be encrypted. What process needs to be followed? Is it possible via props.conf?

Please help me with the process.

Regards, Vij
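If the underlying requirement is hiding sensitive values before they are written to disk, note that Splunk does not encrypt individual records at parse time; the usual props.conf mechanism is masking with SEDCMD. A sketch, with a hypothetical sourcetype and a card-number-style pattern:

# props.conf on the parsing tier (sourcetype and pattern are placeholders)
[my:source:type]
SEDCMD-mask_numbers = s/(\d{4})-\d{4}-\d{4}-(\d{4})/\1-XXXX-XXXX-\2/g

Encryption at rest of the stored index files themselves is handled outside Splunk (e.g., filesystem or volume encryption) rather than through props.conf.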
I have a field called environment which has values like dev, prod, uat, sit. Now I want to create a new field with all the values of the environment field.

Example (4 field values):

environment
dev
prod
uat
sit

After the query (1 field value, separated by any string):

merge_environment = dev | prod | uat | sit

How can I achieve this?
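A sketch of one way: collect the distinct values with stats, then join them with the separator of your choice:

| stats values(environment) as merge_environment
| eval merge_environment=mvjoin(merge_environment, " | ")

stats values() returns the distinct environment values as a multivalue field, and mvjoin() flattens it into a single pipe-separated string.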
Hi,

I am trying to upload the dSYM files automatically in the pipeline by hitting the AppDynamics REST APIs. I would like to know how I can do this using API tokens.

1. I want to generate the token using the AppDynamics REST API. The token generation API requires both an authentication header with username and password and the OAuth request body to successfully request a token. We use only SAML login. Do I need to create a local account for this purpose? And how long can the API token live?

2. API Clients (appdynamics.com): when I generate the token via the Admin UI, it shows the max is 30 days; then it needs to be regenerated.

Any comments on this? I appreciate your inputs.

Thanks,
Viji
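For the API Client route, the token request itself is a standard OAuth client-credentials call; a sketch of its shape (controller host, client name, account, and secret are placeholders, and the client is created under the Controller's API Clients admin page rather than as a local user):

POST https://<controller-host>/controller/api/oauth/access_token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<apiClientName>@<accountName>&client_secret=<clientSecret>

The response carries a short-lived bearer token for the Authorization header; rather than reusing one long-lived token, the pipeline can request a fresh one on each run, so the 30-day UI limit need not block automation.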