All Topics

I took over an established Splunk ecosystem when the main support admin retired. I noticed that not all of our standalone Search Heads, and both Deployment Servers, are set up to forward to all 12 of our Indexers in a single multisite Indexer Cluster (see table below).

Some Search Heads list only the six Indexers assigned to Site 1 in their outputs.conf, while the other Search Heads list only the six Indexers assigned to Site 2. However, all Search Heads have this stanza in their server.conf file:

[clustering]
multisite = true

So the question is: should all of our Splunk instances (Search Heads, Cluster Master, and Deployment Servers) have all 12 Indexers defined in their outputs.conf?

Site 1      Site 2
Indexer01   Indexer07
Indexer02   Indexer08
Indexer03   Indexer09
Indexer04   Indexer10
Indexer05   Indexer11
Indexer06   Indexer12
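For reference, a minimal outputs.conf sketch that forwards to all 12 indexers (the receiving port 9997 is an assumption; indexer discovery via the Cluster Master is an alternative that avoids hard-coding the list on every instance):

```ini
# outputs.conf on each Search Head / Deployment Server (sketch)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = Indexer01:9997, Indexer02:9997, Indexer03:9997, Indexer04:9997, Indexer05:9997, Indexer06:9997, Indexer07:9997, Indexer08:9997, Indexer09:9997, Indexer10:9997, Indexer11:9997, Indexer12:9997
```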
Hi team, I want to compare two weekly results from one index and display the differences. I also want to create a Jira ticket if the results are different. Thanks
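A rough SPL sketch of the week-over-week comparison (the index name `my_index` and the grouping field `item` are placeholders; the Jira ticket itself would be raised by an alert action, e.g. a Jira add-on or a webhook, firing when this search returns rows):

```
index=my_index earliest=-14d@w latest=-7d@w
| stats count as last_week by item
| join type=outer item [
    search index=my_index earliest=-7d@w latest=@w
    | stats count as this_week by item ]
| where last_week != this_week OR isnull(last_week) OR isnull(this_week)
```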
Recently, our on-prem deployment has been crashing because our server's memory limit is being reached. After looking into this, I noticed that the python2 and python3 processes are each consuming 10+ GB of RAM. Once Splunk DB Connect is disabled, the memory is released from those two python processes. Has anyone experienced something like this before, and if so, is there anything I can do to troubleshoot which DB inputs might be causing the problem?
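One place to start is DB Connect's own logs in _internal; a hedged sketch (the source path pattern is an assumption about where your DB Connect version writes its logs, so adjust it to your environment):

```
index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| stats count by source
| sort - count
```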
Hi! My Dashboard Studio datetime looks strange.

[Dashboard Studio view]
name    datetime                        count
tom     2022-12-01T09:00:00:00+09:00    10
jenny   2022-12-01T09:00:00:00+09:00    15

The time comes out weird like the table above; if you look at the detailed view, it looks like the table below.

[Dashboard detail view]
name    datetime      count
tom     2022-12-01    10
jenny   2022-12-01    15

How do I get the datetime to display nicely in Dashboard Studio? Help me, please!
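One common workaround is to format the timestamp as a plain string in the search itself, so the table renders exactly that string. A sketch, taking the field name `datetime` from the tables above (timezone-offset handling is left aside, and this assumes the raw value parses up to the seconds):

```
| eval datetime = strftime(strptime(datetime, "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%d %H:%M:%S")
```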
We are using the Splunk Clockify add-on 2.0.1 on a standalone instance and enabled the "user" input with an API key. After enabling the input we see only 51 users in the search results; the actual user count is 70.

- I checked the ta_clockify_add_on_for_splunk_ClockifyUsers.log file and don't see any ERROR/WARN/FAILED messages.
- Is there any restriction on fetching all the user data from the Clockify add-on?
- Do we need to add or update any settings in the input stanza?
- The log collection interval I have set is 60 sec.

Can someone please help me with this?
Sample logs:

quotation-events~~IM~. ABC~CA~Wed Jan 02 23:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
D0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1
quotation-events~~IS~;S. ABC~CA~Tue Jan 02 23:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
V0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1
quotation-events~~IM~. ADC~BA~Sat Jan 01 13:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
B0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1
quotation-events~~IM~. CCC~HA~Sun Jan 01 20:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
G0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1

Output in Splunk: all events are coming in merged as a single event, and not completely:

D0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1 IS~;S. ABC~CA~Tue Jan 02 23:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events V0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1| quotation-events~~IM~. ADC~BA~Sat Jan 01 13:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events B0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1 quotation-events~~IM~. CCC~HA~Sun Jan 01 20:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events quotation-events~~ quotation-events~~ G0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1

props.conf:

[app:logs:sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n)]+w{8}~~|quotation-events~~
NO_BINARY_CHECK=true
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=75
disabled=false
TIME_FORMAT=%a %b %d %H:%M:%S %Z
TIME_PREFIX=(?:[^~]+~)~(?:[^~]+~){3}
TRUNCATE=99999
ANNOTATE_PUNCT=false
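The LINE_BREAKER above has regex problems: a stray `)` inside the character class, an unescaped `w`, and an alternation that leaves the second alternative without the required capture group around the line terminator. A possible fix, as a sketch (the 8-character record-id assumption comes from the samples above; the TIME_PREFIX may also need review, since the two record types put the timestamp after a different number of `~`-delimited fields):

```ini
[app:logs:sourcetype]
SHOULD_LINEMERGE = false
# break on the newline only when the next line starts with an
# 8-char record id followed by ~~, or the literal prefix
LINE_BREAKER = ([\r\n]+)(?=\w{8}~~|quotation-events~~)
```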
Need help with a regex field extraction. Fields and values:

servername: xtestf100s
log_level: INFO, ERROR, or WARNING
message: anything from "gofer" until the end of the line

Jan 3 03:50:38 xtestf100s goferd: [INFO][worker-0] gofer.messaging.adapter.connect:28 - connecting: proton+amqps://xtest123s.pharma.aventis.com:5647
Jan 3 03:50:38 xtestf100s goferd: [INFO][worker-0] gofer.messaging.adapter.proton.connection:87 - open: URL: amqps://xtest123s.pharma.aventis.com:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|key: None|certificate: /etc/pki/consumer/bundle.pem|host-validation: None
Jan 3 03:50:38 xtestf100s goferd: [ERROR][worker-0] gofer.messaging.adapter.connect:33 - connect: proton+amqps://xtest123s.pharma.aventis.com:5647, failed: Connection amqps://xtest123s.pharma.aventis.com:5647 disconnected: Condition('proton.pythonio', 'Connection refused to all addresses')
Jan 3 03:50:38 xtestf100s goferd: [INFO][worker-0] gofer.messaging.adapter.connect:35 - retry in 106 seconds
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566412): avc: denied { read } for pid=75981 comm="ip" name="libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566413): avc: denied { open } for pid=75981 comm="ip" path="/opt/commvault/Base64/libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566414): avc: denied { getattr } for pid=75981 comm="ip" path="/opt/commvault/Base64/libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566415): avc: denied { execute } for pid=75981 comm="ip" path="/opt/commvault/Base64/libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:51:43 xtestf100s kernel: type=1400 audit(1672714303.392:566416): avc: denied { read } for pid=77988 comm="ip" name="Base" dev="dm-13" ino=116 scontext=system_u:system_r:ifconfig_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=lnk_file permissive=1
Jan 3 03:51:43 xtestf100s kernel: type=1400 audit(1672714303.392:566417): avc: denied { read } for pid=77988 comm="ip" name="libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
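A rex sketch for the goferd lines above (the kernel/audit lines have a different shape and simply won't match; the field names are the ones requested):

```
| rex "^\w+\s+\d+\s+[\d:]+\s+(?<servername>\S+)\s+goferd:\s+\[(?<log_level>\w+)\]\S*\s+(?<message>.*)"
```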
Hi, I have a dashboard with a column chart to display time ranges. I have set the drilldown to update the $timePicker.earliest$ and $timePicker.latest$ tokens within the dashboard. This works perfectly and there are no issues here.

The issue is that the display of the time picker does not change; it always just says "All time" even though the tokens are set to a one-day range. As I mentioned, the whole dashboard recognizes the one-day range; all I need is for the Time Picker display to also show this range. I have tried setting the default time range using the $timePicker.earliest$ and $timePicker.latest$ values, as you can do with a text input, but I have had no luck.

[Code I tried that didn't work]
[Code for the column chart drilldown, which does work and changes the timePicker tokens appropriately]
[Screenshot: the default view when the app is loaded]
[Screenshot: after a column is clicked in the chart — everything changes due to the time picker tokens being changed, but the time picker display is not updated and still says "All Time"]

Additionally, when I open the events in a search after I have selected a time column, the time picker in the search is displayed correctly. This is how I would expect the time picker in my app/dashboard to display.

Please help! Thanks, John
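In Simple XML, setting the `form.`-prefixed version of an input's token is what refreshes the input's displayed value, so a drilldown along these lines may help (a sketch; `$click.value$` and `$click.value2$` stand in for whatever epoch times your columns carry):

```xml
<drilldown>
  <set token="form.timePicker.earliest">$click.value$</set>
  <set token="form.timePicker.latest">$click.value2$</set>
</drilldown>
```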
I'm trying to use the following search to capture information regarding an identification code:

index=calabrio MSG_VOICERECORDING_NOTIFY:SRC_NOTIFY_NO_PACKETS
| rex field=_raw "Filename(?<phoneid>)(?=[A-Z][A-Z][A-Z]).*(?=-)"
| stats count by phoneid

Here is an example of the log entry:

2023-01-04 15:08:09.001175 DEBUG [0xce4] VoiceRecorderUpdateTask.cpp[28] VoiceRecorderUpdateTask::runTask: MSG_VOICERECORDING_NOTIFY:SRC_NOTIFY_NO_PACKETS : Filename(4281-1672873674000-4125-SEP12345678-98962688)

I want to capture the information from the 4th segment. I'm trying to use lookahead to target the three alpha characters. This works as expected in regex101.com, but Splunk is not producing any results. I've read in several articles that lookahead doesn't work as you would expect it to, but I haven't been able to piece together a search that will work. Maybe I'm going about this the wrong way. Any help is appreciated. Thanks, Mike
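Note that the capture group in the search above, `(?<phoneid>)`, is empty, so it can only ever capture an empty string. A sketch that sidesteps the lookaheads entirely by skipping three `-`-delimited segments (in the sample, `phoneid` then lands on the 4th segment, which starts with three uppercase letters):

```
index=calabrio MSG_VOICERECORDING_NOTIFY:SRC_NOTIFY_NO_PACKETS
| rex "Filename\((?:[^-]+-){3}(?<phoneid>[A-Z]{3}[^-]+)"
| stats count by phoneid
```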
Search:

| tstats count where index=att_acc_app source=applicationissues.log by PREFIX(client_application_name=) _time span=1d
| rename client_application_name= as client-application-name
| timechart count by client-application-name span=1d

When I use this query I am not getting accurate results.
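One likely cause: after `tstats`, each row already carries a `count` field, so a second `timechart count` counts rows rather than summing the event counts. A sketch of an adjusted version (index, source, and prefix names taken from the search above; `app` is just a shorter rename):

```
| tstats count where index=att_acc_app source=applicationissues.log by PREFIX(client_application_name=) _time span=1d
| rename client_application_name= as app
| timechart span=1d sum(count) as count by app
```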
We have a problem with custom metrics: metrics we have removed still appear when we go to create a graph, and conversely, some custom metrics don't appear after we create them. When we report the problem to Cisco, they refresh the platform to fix it, but isn't there a faster way? Waiting for Cisco to take action takes weeks. Thank you.
I'm trying to extract logname from the following, so the logname value would be message.log/bblog.log/api.log.

Please note: when the timestamp day of month is 10-31 there is no extra space, but when the day is a single digit (1-9) there is an extra space at the beginning of the event, e.g.:

<10>Jan<space><space>4 15:30:02
<10>Dec<space>31 15:30:02

Here are the sample events:

<10>Jan  4 15:30:02 a2222xyabcd031.xyz.com app1001-cc-NONPROD 2023-01-04 15:30:02 message.log INFORMATION apple:73 dev-banana_Guava-[Messaging.Security] [sys] [THE Outbound | outbound|] claimEligibility=false

<10>Jan  4 15:30:02 ia2222xyabcd031.xyz.com app1001-cc-NONPROD 2023-01-04 15:30:02 bblog.log INFORMATION apple:73 dev-banana_Guava-[Messaging.Security] [sys] [THE Outbound | outbound|] claimEligibility=false

<10>Dec 31 15:30:04 a2222xyabcd031.xyz.com app1001-cc-NONPROD 2023-01-04 15:30:04 api.log INFORMATION apple:73 dev-banana_Guava-[Messaging.Security] [sys] [THE Outbound | outbound|] claimEligibility=false
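A rex sketch that anchors on the inner `YYYY-MM-DD HH:MM:SS` timestamp, so the variable leading whitespace in the syslog header doesn't matter:

```
| rex "\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} (?<logname>\S+\.log)\b"
```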
I'm creating a dashboard with the custom widget builder; I have no problem with that. But I notice that every day, from 14:00 until 20:00, the graphs for all of my applications stop plotting. In that window the graphs show nothing for the business transactions, which shouldn't be possible, because in the default dashboard the transactions are working. This happens with all 5 applications I have configured on AppDynamics. How can I resolve this? I need to see information all day, all the time. Thank you for your answer.
Hi, I need to count how many times a webhook alert action is executed. The idea is to check whether the alert was executed and keep a count; if the count is greater than 5, don't send the alert again.
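A sketch for the counting part, based on the scheduler's internal logs (the saved-search name is a placeholder, and the exact field names in your _internal scheduler events may differ, so verify them against your own data first):

```
index=_internal sourcetype=scheduler savedsearch_name="My Webhook Alert" alert_actions="webhook"
| stats count as executions
| where executions > 5
```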
So, I'm pretty sure that I shouldn't be seeing these errors during an upgrade to 9.0.3. This should probably go into a bug report.

/opt/splunk/bin/splunk btool check --debug

1. Checking: /opt/splunk/etc/apps/search/local/alert_actions.conf
Invalid key in stanza [email] in /opt/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True).
Did you mean 'sendcsv'? Did you mean 'sendpdf'? Did you mean 'sendresults'? Did you mean 'sslAltNameToCheck'? Did you mean 'sslCommonNameToCheck'? Did you mean 'sslVerifyServerCert'? Did you mean 'sslVerifyServerName'? Did you mean 'sslVersions'? Did you mean 'subject'? Did you mean 'subject.alert'? Did you mean 'subject.report'?

2. Invalid key in stanza [instrumentation.usage.tlsBestPractices] in /opt/splunk/etc/apps/splunk_instrumentation/default/savedsearches.conf, line 451:
| append [| rest /services/configs/conf-pythonSslClientConfig | eval sslVerifyServerCert (value: if(isnull(sslVerifyServerCert),"unset",sslVerifyServerCert), splunk_server=sha256(splunk_server)
| stats values(eai:acl.app) as python_configuredApp values(sslVerifyServerCert) as python_sslVerifyServerCert by splunk_server
| eval python_configuredSystem=if(python_configuredApp="system","true","false")
| fields python_sslVerifyServerCert, splunk_server, python_configuredSystem]
| append [| rest /services/configs/conf-web/settings | eval mgmtHostPort=if(isnull(mgmtHostPort),"unset",mgmtHostPort), splunk_server=sha256(splunk_server)
| stats values(eai:acl.app) as fwdrMgmtHostPort_configuredApp values(mgmtHostPort) as fwdr_mgmtHostPort by splunk_server
| eval fwdrMgmtHostPort_configuredSystem=if(fwdrMgmtHostPort_configuredApp="system","true","false")
| fields fwdrMgmtHostPort_sslVerifyServerCert, splunk_server, fwdrMgmtHostPort_configuredSystem]
| append [| rest /services/configs/conf-server/sslConfig | eval cliVerifyServerName=if(isnull(cliVerifyServerName),"feature",cliVerifyServerName), splunk_server=sha256(splunk_server)
| stats values(cliVerifyServerName) as servername_cliVerifyServerName values(eai:acl.app) as servername_configuredApp by splunk_server
| eval cli_configuredSystem=if(cli_configuredApp="system","true","false")
| fields cli_sslVerifyServerCert, splunk_server, cli_configuredSystem]
Hello, I am using Splunk 9.0.0.1 and running btool to list out my index settings. The trouble is I only want one stanza, but btool treats the stanza name as a wildcard.

splunk btool --debug indexes list cisco

I get all stanzas with "cisco" in them (there are 51 of them, including the "cisco" index itself). How do I restrict this? I only want the "cisco" index.

--jason
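btool only prefix-matches the stanza name, so one workaround is to filter its output down to the exact stanza afterwards, e.g. with awk (a sketch; note this drops --debug, because --debug prefixes each line with the source file path and would break the `^\[` match):

```shell
# print only the [cisco] stanza, up to (not including) the next stanza header
splunk btool indexes list | awk -v s="[cisco]" '/^\[/{p=($0==s)} p'
```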
I'm trying to create a table to view hosts in multiple indexes and report whether they are returning data. For example:

Host   Index1   Index2   Index3
A      OK       OK
B               OK       OK
C      OK                OK

I've been using inputlookups to create a static list of hosts to reference, and appendcols to search indexes for the correct information. However, when used together the data isn't quite matching up like it does when I search separately. Any suggestions?
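One alternative to appendcols (which aligns rows by position rather than by host, and so can mismatch) is a single search that builds the columns with dynamic field names. A sketch, assuming the indexes are literally named index1/index2/index3 and the lookup is expected_hosts.csv with a host column:

```
| tstats count where index IN (index1, index2, index3) by host, index
| eval {index}="OK"
| append [| inputlookup expected_hosts.csv | fields host]
| stats values(index1) as Index1, values(index2) as Index2, values(index3) as Index3 by host
```

Hosts present only in the lookup then appear as rows with empty cells.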
Hi, is there any way to execute a Linux command and fetch its results in the Splunk search view? Following that, I have written a condition to send an alert based on the command output.
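Splunk doesn't run arbitrary OS commands from the search bar; the usual pattern is a scripted input that periodically indexes the command's output, which you then search and alert on. A minimal inputs.conf sketch (the script path, interval, sourcetype, and index are all assumptions):

```ini
# inputs.conf (sketch): index the output of a shell script every 5 minutes
[script://$SPLUNK_HOME/etc/apps/my_app/bin/check_something.sh]
interval = 300
sourcetype = script:check_something
index = main
disabled = false
```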
I have a RHEL5 instance running Universal Forwarder 7.0.3, currently sending logs to Splunk Enterprise. We are in the process of migrating to Splunk Cloud. Splunk Cloud doesn't accept anything below TLS 1.2, and I can't use HEC from the host because its TLS version is 1.0.

As part of the solution, I came up with using an intermediate forwarder. This can forward the logs; however, what I am getting is all hex characters. Something like this:

\x00\x8F\x00\x00\x8Bo\xF5\x86\x84᜝h\xFCt5\xCB4T^\x9B\xBC\xE3c\xE6i\xD3\xA5\xCE/\x00\x00 \xC0,\xC00\xC0+\xC0/\xC0$\xC0(\xC0#\xC0'\x00\x9D\x00\x9C\x00<\xC0.\xC0-\xC0&\xC0%\x00\xFF\x00\x00A\x00 \x00\x00\x00

At some point, I also saw "--splunk-cooked-mode-v3--" in the logs.

The inputs file for the intermediate forwarder is this:

[splunktcp://<Source IP>:<Port>]
index = <my index>
disabled = false

The output is just the standard HEC. The version of the universal forwarder that I am using on the intermediate is 9.0.3. The universal forwarder on the source cannot be updated beyond its current version since it is RHEL5. How can I get clean data instead of hex?
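For what it's worth, those bytes look like a TLS handshake being indexed as raw data, which can happen when the source sends SSL to a plain splunktcp port; "--splunk-cooked-mode-v3--" likewise shows up when cooked S2S traffic is read by something expecting raw data. If the source UF is configured for SSL output, a hedged inputs.conf sketch for the intermediate would be (port and certificate path are assumptions; the cert setup must match what the old UF trusts):

```ini
# inputs.conf on the intermediate forwarder (sketch)
[splunktcp-ssl:9997]
disabled = false

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```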
Hi, I would like to have the initial administrator setup and the controller's license URL/name/IP range, plus the controller username and access key, so I can launch the controller before we start configuring the agents with it. I have access [Redacted] to this URL, but I would like to configure the above-mentioned parts to get ready to access it. I have shared my screen, and I don't see any Administration option under Settings to set up users and roles. I scheduled a support 1:1 call, but no one joined. It would be great if I could get some guidance here. Regards, Raji

^ Post edited by @Ryan.Paredez to remove Controller URL/name. Please do not share Controller URL/Name on Community posts for security and privacy reasons.