All Topics


After the Splunk forwarder was upgraded from 9.0.5.0 to 9.3.1.0, our Windows servers are having issues forwarding data to Splunk. Splunkd stops frequently on different servers; after restarting splunkd it starts forwarding data again, but the issue returns after two or three days. What actions should we take to keep logs flowing to Splunk?
Hi All, Our current setup involves Splunk Search Heads hosted in Splunk Cloud and managed by Support. The existing Deployment Master server is hosted on Azure, where it has been operating smoothly, supporting around 900+ clients that send logs to Splunk through it. We are now planning to migrate the Deployment Master from Azure to an on-premises Nutanix environment.

We have built a new server on-premises with the necessary hardware specifications and are preparing to install the latest Splunk Enterprise package (version 9.3.1) downloaded from the Splunk website. We will place this package in the `/tmp` directory on the new server, extract it into `/opt`, accept the license agreement, and start Splunk services. Once it is up, we will access the GUI to import the Enterprise licenses.

Next, I will download the Splunk Universal Forwarder Credential package (Splunkclouduf app) from the Splunk Cloud Search Head. Could you confirm whether this downloaded app should be placed in `/opt/splunk/etc/apps`, `/opt/splunk/etc/deployment-apps`, or `/tmp` on the new server? From there, we can proceed with the installation. Once installed, the Splunkclouduf app will create a `100_splunkcloud` folder in `/opt/splunk/etc/apps`. Should I then copy the `100_splunkcloud` folder to `/opt/splunk/etc/deployment-apps`? Also, can we rename the folder from "100_splunkcloud" to a custom name?

Additionally, the next step will involve transferring all deployment apps from the `deployment-apps` directory on the old server (`/opt/splunk/etc/deployment-apps`) to the same location on the new server; please confirm whether this is correct. Finally:
- Update the `deploymentclient` app on both the old and new Deployment Master servers with the new server name.
- Reload the server classes on the old Deployment Master server.
- Verify that all clients are reporting to the new Deployment Master server.

Please let me know whether these steps are correct or whether I have missed anything, so that the new DM server runs fine post-migration.
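For the repointing step at the end, the change usually lives in deploymentclient.conf inside the app you push to the clients. A minimal sketch, assuming the new on-prem server is reachable as new-ds.example.com (a placeholder hostname) on the default management port:

[deployment-client]

[target-broker:deploymentServer]
# Point the ~900 clients at the new on-prem Deployment Server
targetUri = new-ds.example.com:8089

Pushing this updated app from the old Deployment Server first lets the clients pick up the new targetUri before the Azure server is decommissioned.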
Can someone suggest whether we can configure the Cluster Master to also work as the License Master? I tried to configure it, but it throws this error: reason='Unable to connect to license manager=https://xx.xx.xx.xx:8089 Read Timeout'
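Running the license manager role on the Cluster Master is a common consolidation. On each node that should report to it, the pointer lives in server.conf; a minimal sketch, assuming the Cluster Master's management port 8089 is reachable from the peers (a Read Timeout usually indicates a firewall or wrong IP rather than a licensing problem):

[license]
# 'manager_uri' on recent 9.x releases; older versions use 'master_uri'
manager_uri = https://<cluster-master-ip>:8089

Testing the port from the peer first, for example with curl -k https://<cluster-master-ip>:8089, is a quick way to rule out connectivity before restarting Splunk.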
We have a plan to migrate an old physical server to a new physical server; the server is a Search Head component in our Splunk environment. The new physical server will receive a new IP address. My query is how to configure the new IP in the existing Splunk server environment. Our Splunk environment has:
1 Cluster Master
4 Indexers
1 Deployment Server
1 Search Head
1 Monitoring Console
1 License Master
DR servers: 1 Search Head, 1 Indexer
I have a custom command that populates a lookup, but when I run it, it only runs the script 5-20 times (the number changes every run) even though the base search returns 20,000+ results. I want to run a query that sends each result into a custom script, which then populates a lookup, almost as if it were recursive. I'm thinking this is a performance issue with the script (it is a Python script, so it's not the fastest). This is an example of what the command looks like:

index="*" host="example.org" | map search="| customcommand \"$src$\""
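One thing to check before blaming script performance: map only runs a bounded number of subsearches, controlled by its maxsearches argument (the default is 10), which would explain the low invocation count. A hedged sketch raising that limit, keeping the original search and command names from the post:

index="*" host="example.org"
| map search="| customcommand \"$src$\"" maxsearches=25000

Note that map launches one search per input row, so 20,000+ invocations will be slow regardless; if the custom command can accept the whole result set at once (as a streaming or reporting command), that usually scales much better.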
I have syslogs coming into Splunk that need some cleaning up - the payload is essentially JSON with a few extra characters here and there (enough to make it improperly formatted). I'd really like to use KV_MODE = json to auto-extract fields, but those additional characters prevent this from happening. So I wrote a few SEDCMDs to remove those additional characters and applied the stanzas to a new sourcetype. However, in our distributed Splunk Cloud environment, these SEDCMDs are not working. There are no errors in the _internal index pertaining to this sourcetype, and I can tell the sourcetype is applying because any key/value pairs in the data that appear before the extra characters are automatically extracted at search time as expected (so at least I know the KV_MODE setting is trying to work). Because the SEDCMDs are not removing the extra characters, the other fields are not being auto-extracted. In my all-in-one test environment, the SEDCMDs work perfectly alongside KV_MODE to clean up the data and pull out the fields. I can't quite determine why it isn't working in Cloud - the syslog servers forwarding this data have Universal Forwarders, so I understand why the sourcetype isn't applied at that level... but this sourcetype should be hitting the indexers and applied there, no? What am I missing?
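For reference, a minimal sketch of the props.conf involved; the sourcetype name and SEDCMD patterns are placeholders, since the actual stanzas are not shown in the post. The key split is that SEDCMD is an index-time setting and has to live in an app on the first full parsing tier (the Cloud indexers, or a heavy forwarder in front of them), while KV_MODE is search-time and belongs on the search heads:

[my:syslog:json]
# Index-time: strip any leading/trailing junk around the JSON payload (placeholder patterns)
SEDCMD-strip_leading = s/^[^{]+//
SEDCMD-strip_trailing = s/[^}]*$//
# Search-time: auto-extract the now-clean JSON
KV_MODE = json

In Splunk Cloud that usually means packaging this in a private app and getting it installed on the indexer tier; if the app only reached the search heads, the search-time KV_MODE would "half work" exactly as described while the SEDCMDs silently do nothing.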
Please, can anyone tell me the steps to migrate the old data to a new server while upgrading Splunk to version 9.3? I have checked the Splunk documentation but did not understand it properly. Could anyone please help with this? The present Splunk version is 8.2.0.
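At a high level, the usual approach is: install the same (old) version on the new server, copy the configuration and index data across while Splunk is stopped, confirm everything works, and only then upgrade to 9.3 (coming from 8.2 you may need an intermediate hop such as 9.0 or 9.1 - check the supported upgrade paths in the docs for your exact build). A rough sketch of the copy step, assuming a default /opt/splunk install and a placeholder hostname newserver:

# On both servers, stop Splunk first
/opt/splunk/bin/splunk stop

# Copy configuration and index data to the new server
rsync -av /opt/splunk/etc/ newserver:/opt/splunk/etc/
rsync -av /opt/splunk/var/lib/splunk/ newserver:/opt/splunk/var/lib/splunk/

If indexes.conf points at non-default volumes, those paths need to be copied as well.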
I am getting the following error while configuring LDAP on my Splunk instances (I tried it on the Splunk deployment server, indexers, and HFs) and get the same error everywhere. Can someone help me understand what's going wrong?

"Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/config_explorer/authentication/providers/LDAP: The read operation timed out',)"

I tried increasing these attributes in authentication.conf, but still no luck:
network_timeout = 1200
sizelimit = 10000
timelimit = 1500

web.conf:
[settings]
enableSplunkWebSSL = true
splunkdConnectionTimeout = 1201
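One detail worth double-checking: network_timeout, sizelimit, and timelimit are read from the LDAP strategy stanza in authentication.conf, not from a global stanza, so they only take effect if they sit under the strategy name. A minimal sketch with placeholder strategy and host names:

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
network_timeout = 1200
sizelimit = 10000
timelimit = 1500

[authentication]
authType = LDAP
authSettings = corp_ldap

A read timeout at save time also frequently means the Splunk server simply cannot reach the LDAP host and port (firewall, wrong port, SSL mismatch), so testing connectivity with ldapsearch or openssl s_client from the same box is a quick way to rule that out.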
In a Splunk dashboard I need the following panels:
- Total request count for security tokens / priority tokens, filtered by partner name
- Duplicate request count, filtered by partner name and customer ID (to check whether the current expiration times for both tokens are appropriate)
- Priority token usage, filtered by partner name
- Response time analysis for security tokens / priority tokens

How do I create/add panels for these four options?
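Each panel is ultimately a search behind a visualization, so the main work is writing one SPL query per bullet and wiring the partner-name filter to a dashboard input token. A hedged sketch for the first panel; the index, sourcetype, field names (partner_name, token_type, customer_id), and the $partner_tok$ input token are all assumptions, since they depend on how the token service logs:

index=token_service sourcetype=token:requests (token_type="security" OR token_type="priority") partner_name="$partner_tok$"
| stats count AS total_requests BY partner_name token_type

The duplicate-request panel can follow the same shape with stats count BY partner_name customer_id followed by a where count > 1, and the response-time panel with stats avg() / perc95() over whatever duration field the logs carry.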
Hello Everyone, I have the Splunk query below:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")
| search "* Did not observe any item or terminal signal within*"
| spath "paymentStatusResponse.orderCode"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster, values(host) as hostname, count(host) as count, values(correlation-id{}) as corr_id, values(paymentStatusResponse.orderCode) as order_code

In this query we have two loggers. In the PaymentErrorHandler logger, I get the message containing "Did not observe any item or terminal signal within". In the EmsPaymentStatusClientImpl logger, I get the JSON response object containing the "paymentStatusResponse.orderCode" value. Both loggers share correlation-id{} as a common element. I want to output a table containing cluster, hostname, count, corr_id and order_code, but the order code is always empty. Please help.
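A likely reason order_code comes back empty is that the | search "* Did not observe *" line keeps only the PaymentErrorHandler events, so by the time stats runs there are no events left that carry paymentStatusResponse.orderCode. A hedged sketch that keeps both loggers' events and groups them on the shared correlation id instead (field names are taken from the post; adjust the logger name if it is really EmsPaymentStatusClientImpl):

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")
| spath "paymentStatusResponse.orderCode"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| rename "correlation-id{}" AS corr_id
| stats values(clusters) AS cluster, values(host) AS hostname, count AS count, values(paymentStatusResponse.orderCode) AS order_code BY corr_id
| search order_code=*

Grouping BY corr_id is what stitches the error event and the status-response event into one row. If you only want correlations that actually hit the timeout message, compute a flag before the stats, for example eval timed_out=if(searchmatch("Did not observe any item or terminal signal within"), 1, 0), take max(timed_out) in the stats, and filter on it afterwards.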
I have this query (it is not mapped to ink name):

| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
| sort - time
| table time ink_type

The result looks as expected, but I want it to show only events from the latest log date - in this case, only the top 3 rows. And when new logs come in, it should show only those new logs.
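One way to keep only events from the most recent date is to derive the date from the extracted timestamp and compare it against the maximum date seen in the result set. A sketch building on the query above (assuming the time field really is in YYYY-MM-DD HH:MM:SS form, so its first 10 characters are the date):

| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
| eval log_date=substr(time, 1, 10)
| eventstats max(log_date) AS latest_date
| where log_date==latest_date
| sort - time
| table time ink_type

Because ISO-style dates sort correctly as strings, max(log_date) is always the newest date, so the table automatically rolls forward as new logs arrive.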
Hello everyone, could you please help me edit this app for FMC logs?
Hi, We recently upgraded the Heavy Forwarders (HF) of our Splunk Enterprise deployment. After the upgrade, the Universal Forwarders stopped sending data (e.g. Linux logs) to the HFs over HTTP, and the logs are not searchable on the Search Head. We upgraded from v9.1.2 to 9.3.0. We also tried 9.3.1, which did not make any difference - logs are still not being sent. v9.2.3 works without issues. I checked the logs on a UF on v9.3.x and can see:

ERROR S2SOverHttpOutputProcessor [8340 parsing] - HTTP 503 Service Unavailable

However, I cannot figure out what causes the issue. Telnet from UF to HF works, and telnet from HF to the indexers also works. The tokens on the Deployment Server and UFs are the same. Please advise.
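For context, S2S over HTTP from a UF rides on the HF's HTTP Event Collector endpoint, so an HTTP 503 usually points at that endpoint rather than at basic network reachability. A minimal sketch of the two sides, with placeholder host and token values, to compare against what is actually deployed (this is the set of relevant settings, not a claim about what changed between 9.2.3 and 9.3.x):

# outputs.conf on the UF
[httpout]
httpEventCollectorToken = <token defined on the HF>
uri = https://hf.example.com:8088

# inputs.conf on the HF - HEC must be enabled and the token not disabled
[http]
disabled = 0

[http://uf_s2s_token]
token = <same token>
disabled = 0

Checking splunkd.log on the HF and the HEC health endpoint (for example curl -k https://hf.example.com:8088/services/collector/health) right after a 9.3.x UF restart should show whether the 503 comes from HEC being disabled, overloaded, or rejecting the token.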
Hello, We are in the process of fully migrating our Splunk Enterprise deployment to the Azure Cloud and will no longer be using Splunk Enterprise on-premises. Specifically, I have a question about moving the search head and all its associated components to the cloud without causing disruptions. While we found a Work Instruction on the Splunk website, it wasn't clear enough to follow, and we're concerned about minimizing downtime during the migration process. Could anyone provide guidance (step-by-step guidance) or best practices for migrating a Splunk search head and its components to the Azure Cloud, ensuring no service interruptions during the transition? Your help would be greatly appreciated!
Hi All, I'm having trouble getting conditional formatting to work for a column chart in Dashboard Studio. I want something pretty simple: the column "ImpactLevel" should be colored red if the value is less than 50, orange if the value is between 50 and 80, and yellow if the value is more than 80. ImpactLevel is the only series on the y2 axis of the column chart. Here is the JSON for my chart:

"type": "splunk.column",
"options": {
    "y": "> primary | frameBySeriesNames('_lower','_predicted','_upper','avg','max','min','volume','ImpactLevel')",
    "y2": "> primary | frameBySeriesNames('ImpactLevel')",
    "y2AxisMax": 100,
    "overlayFields": [
        "volume"
    ],
    "legendDisplay": "bottom",
    "seriesColorsByField": {
        "ImpactLevel": [
            {
                "value": "#dc4e41",
                "to": 50
            },
            {
                "value": "#f1813f",
                "from": 50,
                "to": 80
            },
            {
                "value": "#f8be44",
                "from": 80
            }
        ]
    }
},
"dataSources": {
    "primary": "ds_9sBnwPWM_ds_stihSmPw"
},
"title": "HP+ Claims E2E",
"showProgressBar": true,
"eventHandlers": [
    {
        "type": "drilldown.linkToDashboard",
        "options": {
            "app": "sre",
            "dashboard": "noc_priority_dashboard_regclaimdrilldown",
            "newTab": true,
            "tokens": [
                {
                    "token": "time.latest",
                    "value": "$time.latest$"
                },
                {
                    "token": "time.earliest",
                    "value": "$time.earliest$"
                },
                {
                    "token": "span",
                    "value": "$span$"
                }
            ]
        }
    }
],
"showLastUpdated": false,
"context": {}
We have logs that are written to /var/log and /var/log/audit. We need to keep these for 365 days and want to ensure that we are following best practices - is there a set of configuration settings we can follow to make sure of that? Ultimately, we want to ensure we have log retention and that /var/log is not a cluttered mess. Thank you!
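Two knobs usually cover this: an inputs.conf monitor on the forwarder so the files are indexed, and frozenTimePeriodInSecs on the index so Splunk keeps the data for a year. A minimal sketch; the index name os_linux is a placeholder:

# inputs.conf on the forwarder - /var/log recurses into subdirectories, including /var/log/audit
[monitor:///var/log]
index = os_linux
disabled = 0

# indexes.conf on the indexers - 365 days * 86400 seconds
[os_linux]
frozenTimePeriodInSecs = 31536000

Note this controls how long Splunk retains the indexed copy; keeping /var/log itself tidy on the hosts is still a job for logrotate rather than Splunk.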
My office has deployed around 120 devices that they have now requested Splunk be added to. We have been unsuccessful in getting the CLI commands to work for a successful install. The GUI installer works, but that would mean I have to reach out and touch each machine directly to set it up. Is there a way to automate the install so that we can deploy it remotely?
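Assuming these are Windows devices and the universal forwarder MSI, the installer supports a silent command line that can be pushed through GPO, SCCM/Intune, or any remote-execution tool, so nobody has to click through the GUI. A sketch with placeholder hostnames and version number:

msiexec.exe /i splunkforwarder-9.x.x-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" RECEIVING_INDEXER="idx.example.com:9997" /quiet

Pointing the install at a deployment server (rather than baking every setting into the command) keeps the command short and lets you manage inputs and outputs centrally afterwards.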
I am trying to make a search that fires only when an admin makes a change to their own account. I want to know if a-johndoe gives multiple permissions to a-johndoe, and NOT if a-johndoe gives permissions to a-janedoe. Would I use an IF statement for this? Thank you.
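Rather than an if(), the usual trick is to compare the acting account against the target account and keep only the rows where they match. A hedged sketch against Windows security events (the event codes and the src_user/user field names are assumptions - they depend on your data source and add-on):

index=wineventlog (EventCode=4728 OR EventCode=4732 OR EventCode=4756)
| where src_user == user
| stats count AS changes values(EventCode) AS event_codes BY src_user user

The where clause is what limits results to self-modifications, so a-johndoe changing a-johndoe matches while a-johndoe changing a-janedoe does not; adding a where changes > 1 afterwards captures the "multiple permissions" angle.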
Hey, I am looking for the third-party notices for the Splunk Add-on for Palo Alto Networks 1.0.0. Unfortunately, I cannot find them in the documentation, since the corresponding section in the Release Notes for the Splunk Add-on for Palo Alto Networks is empty. Can anyone help me out with this and provide the third-party notice information? Best regards! Matthias
Hello, I am trying to monitor the path below on a host that has a UF installed:

C:\Program Files (x86)\dir1\log\name_CRT_[DDMMYYYY].log

I have inserted the stanza below, but I have not received any logs:

[monitor://C:\Program Files (x86)\dir1\log\name_CRT_[DDMMYYYY].log]
sourcetype = mylog:auditlog
disabled = 0
index = test

Any help please?
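A likely issue is that [DDMMYYYY] in the monitor path is not expanded to today's date, so the stanza never matches the real file names. A sketch using a wildcard for the rotating date portion (same sourcetype and index as above, which are taken from the post):

[monitor://C:\Program Files (x86)\dir1\log\name_CRT_*.log]
sourcetype = mylog:auditlog
index = test
disabled = 0

After deploying the change, splunkd.log on the UF (look for TailingProcessor entries mentioning the path) should confirm whether the files are now being picked up.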