All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I have been working on an Ansible playbook to deploy the UF to different servers. So far everything works until the playbook tries to execute the command to start Splunk for the first time. The task is as follows:

- name: Start splunk service
  become: true
  become_method: sudo
  become_user: splunk
  command: /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd {{uf_user_password}}
  register: console

Ansible just gets stuck there and the task never ends. If you check the server, you can see that the command executed is the correct one, even with the right user, but nothing happens. If you run the same command with the same user on the server, we get this:

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
This appears to be your first time running this version of Splunk.
Creating unit file...
Error calling execve(): No such file or directory
Error launching command: No such file or directory
Failed to create the unit file. Please do it manually later.

Splunk> The Notorious B.I.G. D.A.T.A.

Checking prerequisites...
Checking mgmt port [8089]: open
Creating: /opt/splunkforwarder/var/lib/splunk
Creating: /opt/splunkforwarder/var/run/splunk
Creating: /opt/splunkforwarder/var/run/splunk/appserver/i18n
Creating: /opt/splunkforwarder/var/run/splunk/appserver/modules/static/css
Creating: /opt/splunkforwarder/var/run/splunk/upload
Creating: /opt/splunkforwarder/var/run/splunk/search_telemetry
Creating: /opt/splunkforwarder/var/spool/splunk
Creating: /opt/splunkforwarder/var/spool/dirmoncache
Creating: /opt/splunkforwarder/var/lib/splunk/authDb
Creating: /opt/splunkforwarder/var/lib/splunk/hashDb
New certs have been generated in '/opt/splunkforwarder/etc/auth'.
Checking conf files for problems...
Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.0.4-de405f4a7979-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done
[ OK ]

We have also tried different approaches, like using a script file and executing it instead of calling the command directly, but we always get the same results. Any suggestions? Regards
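A pattern that sometimes helps here (a minimal sketch, not a confirmed fix): splunk start can leave helper processes attached to the session, which keeps the Ansible task from returning, so run the task asynchronously with a hard timeout and make it idempotent. The creates: path below is an assumption; adjust it to whatever your first start actually produces.

- name: Start splunk service (first run)
  become: true
  become_method: sudo
  become_user: splunk
  ansible.builtin.command:
    cmd: /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd {{ uf_user_password }}
    creates: /opt/splunkforwarder/var/run/splunk/splunkd.pid   # assumed marker of a completed first start
  async: 300   # fail after five minutes instead of hanging indefinitely
  poll: 10
  register: console

Separately, the "Failed to create the unit file" lines suggest the splunk user cannot run systemctl; it may be worth creating the unit in its own root-privileged task with "splunk enable boot-start" before starting the service.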
Hi, I am trying to launch a new instance from an image created by an existing EC2 instance that hosts Splunk. When I launch the new one, everything looks fine (Splunk was already installed, files remained unchanged, etc.). However, I was not able to access the Splunk app via <ipv4 address>:<port> (we are using 8443 instead, but our inbound rule allows 8000, 8443, 8089...). I checked the inbound rules and they are identical to the old instance, which has all the correct ports set up. When I run `sudo /opt/splunk/bin/splunk restart`, here is what I got:

splunkd 26175 was not running.
Stopping splunk helpers...
[ OK ]
Done.
Stopped helpers.
Removing stale pid file... done.
splunkd is not running. [FAILED]

Splunk> The Notorious B.I.G. D.A.T.A.

Checking prerequisites...
Checking http port [8443]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _configtracker _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket boost_prod_connect history main summary
Done
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-9.0.3-dd0128b1f8cd-linux-2.6-x86_64-manifest'
File '/opt/splunk/etc/apps/splunk_instrumentation/default/savedsearches.conf' changed.
Problems were found, please review your files and move customizations to local
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done
[ OK ]

Waiting for web server at https://127.0.0.1:8443 to be available...................................splunkd 27894 was not running.
Stopping splunk helpers...
[ OK ]
Done.
Stopped helpers.
Removing stale pid file... done.
WARNING: web interface does not seem to be available!

I also checked splunkd.log, and here is a snapshot of the log:

06-07-2023 18:37:29.610 +0000 INFO DatabaseDirectoryManager [28341 indexerPipe] - idx=_audit writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/audit/db' pendingBucketUpdates=1 innerLockTime=0.000. Reason='New hot bucket bid=_audit~47~5C52B298-3A3B-4A82-9F95-B9738E1D9BFB bucket_action=add'
06-07-2023 18:37:29.610 +0000 INFO DatabaseDirectoryManager [28341 indexerPipe] - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/audit/db duration=0.000
06-07-2023 18:37:29.619 +0000 INFO ServerRoles [28341 indexerPipe] - Declared role=indexer.
06-07-2023 18:37:30.122 +0000 WARN IntrospectionGenerator:resource_usage [28362 ExecProcessor] - SSLOptions - server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
06-07-2023 18:37:30.126 +0000 WARN IntrospectionGenerator:resource_usage [28362 ExecProcessor] - SSLCommon - PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
06-07-2023 18:37:30.188 +0000 INFO ProcessTracker [27894 MainThread] - (child_0__Fsck) Fsck - (entire bucket) Rebuild for bucket='/opt/splunk/var/lib/splunk/audit/db/db_1686162521_1686162521_46' took 2703.9 milliseconds
06-07-2023 18:37:30.425 +0000 INFO TailingProcessor [28425 MainTailingThread] - TailWatcher initializing...
06-07-2023 18:37:30.425 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json.
06-07-2023 18:37:30.426 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME/var/spool/splunk.
06-07-2023 18:37:30.426 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME/var/spool/splunk/...stash_hec.
06-07-2023 18:37:30.426 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME/var/spool/splunk/...stash_new.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME/var/spool/splunk/tracker.log*.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/etc/splunk.version.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/introspection.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/python_upgrade_readiness_app.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/splunk.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/splunk/configuration_change.log.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/splunk/license_usage_summary.log.
06-07-2023 18:37:30.427 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/splunk/splunk_instrumentation_cloud.log*.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME/var/log/watchdog/watchdog.log*.
06-07-2023 18:37:30.428 +0000 INFO TailReader [28425 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
06-07-2023 18:37:30.428 +0000 INFO TailReader [28425 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/etc/splunk.version.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/var/log/introspection.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/var/log/python_upgrade_readiness_app.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/var/log/splunk.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/var/log/watchdog.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/var/run/splunk/search_telemetry.
06-07-2023 18:37:30.428 +0000 INFO TailingProcessor [28425 MainTailingThread] - Adding watch on path: /opt/splunk/var/spool/splunk.
06-07-2023 18:37:30.450 +0000 INFO TailReader [28443 tailreader0] - Registering metrics callback for: tailreader0
06-07-2023 18:37:30.450 +0000 INFO TailReader [28443 tailreader0] - Starting tailreader0 thread
06-07-2023 18:37:30.462 +0000 INFO TailReader [28444 batchreader0] - Registering metrics callback for: batchreader0
06-07-2023 18:37:30.462 +0000 INFO TailReader [28444 batchreader0] - Starting batchreader0 thread
06-07-2023 18:37:30.467 +0000 INFO ConfigWatcher [27902 HTTPDispatch] - Loaded configtracker settings with disabled=0 mode=auto log_throttling_disabled=1 log_throttling_threshold_ms=10.000 denylist= exclude_fields=
06-07-2023 18:37:30.529 +0000 WARN IntrospectionGenerator:resource_usage [28362 ExecProcessor] - SSLOptions - server.conf/[kvstore]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
06-07-2023 18:37:30.643 +0000 INFO IntrospectionGenerator:resource_usage [28362 ExecProcessor] - RU_main - I-data gathering (Resource Usage) starting; period=10s
06-07-2023 18:37:30.733 +0000 INFO IntrospectionGenerator:resource_usage [28362 ExecProcessor] - RU_main - I-data gathering (IO Statistics) starting; interval=60s
06-07-2023 18:37:30.733 +0000 INFO IntrospectionGenerator:resource_usage [28362 ExecProcessor] - RU_main - Starting I-data gathering (IOWait Statistics). Interval_secs=10
06-07-2023 18:37:31.065 +0000 INFO ConfigWatcher [28445 SplunkConfigChangeWatcherThread] - SplunkConfigChangeWatcher initializing...
06-07-2023 18:37:31.065 +0000 INFO ConfigWatcher [28445 SplunkConfigChangeWatcherThread] - Kernel File Notification is enabled on this instance. inotify will be used for configuration tracking.
06-07-2023 18:37:31.067 +0000 INFO ConfigWatcher [28445 SplunkConfigChangeWatcherThread] - Watching path: /opt/splunk/etc/system/local, /opt/splunk/etc/system/default, /opt/splunk/etc/apps, /opt/splunk/etc/users, /opt/splunk/etc/peer-apps, /opt/splunk/etc/instance.cfg
06-07-2023 18:37:31.195 +0000 INFO ConfigWatcher [28445 SplunkConfigChangeWatcherThread] - Finding the deleted watched configuration files (while splunkd was down) completed in duration=0.127 secs
06-07-2023 18:37:31.362 +0000 INFO IndexerIf [28341 indexerPipe] - Asked to add or update bucket manifest values, bid=_audit~46~5C52B298-3A3B-4A82-9F95-B9738E1D9BFB
06-07-2023 18:37:31.438 +0000 INFO loader [27902 HTTPDispatch] - Limiting REST HTTP server to 21845 sockets
06-07-2023 18:37:31.438 +0000 INFO loader [27902 HTTPDispatch] - Limiting REST HTTP server to 161 threads
06-07-2023 18:37:31.438 +0000 WARN X509Verify [27902 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: <http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates>
06-07-2023 18:37:32.194 +0000 INFO UiHttpListener [28468 WebuiStartup] - Server supporting SSL versions TLS1.2
06-07-2023 18:37:32.194 +0000 INFO UiHttpListener [28468 WebuiStartup] - Using cipher suite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
06-07-2023 18:37:32.194 +0000 INFO UiHttpListener [28468 WebuiStartup] - Using ECDH curves : prime256v1, secp384r1, secp521r1
06-07-2023 18:37:32.197 +0000 WARN X509Verify [28468 WebuiStartup] - X509 certificate (O=SplunkUser,CN=ip-172-31-46-102.us-west-2.compute.internal) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: <http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates>
06-07-2023 18:37:32.197 +0000 INFO UiHttpListener [28468 WebuiStartup] - Limiting UI HTTP server to 21845 sockets
06-07-2023 18:37:32.197 +0000 INFO UiHttpListener [28468 WebuiStartup] - Limiting UI HTTP server to 161 threads
06-07-2023 18:37:32.251 +0000 INFO DatabaseDirectoryManager [28321 IndexerService] - idx=_audit writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/audit/db' pendingBucketUpdates=1 innerLockTime=0.000. Reason='IndexerService periodic manifest update'
06-07-2023 18:37:32.252 +0000 INFO DatabaseDirectoryManager [28321 IndexerService] - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/audit/db duration=0.001
06-07-2023 18:37:32.309 +0000 INFO ProxyConfig [28468 WebuiStartup] - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.
06-07-2023 18:37:32.310 +0000 INFO ProxyConfig [28468 WebuiStartup] - Failed to initialize https_proxy from server.conf for splunkd. Please make sure that the https_proxy property is set as https_proxy=http://host:port in case HTTP proxying needs to be enabled.
06-07-2023 18:37:32.310 +0000 INFO ProxyConfig [28468 WebuiStartup] - Failed to initialize the proxy_rules setting from server.conf for splunkd. Please provide a valid set of proxy_rules in case HTTP proxying needs to be enabled.
06-07-2023 18:37:32.310 +0000 INFO ProxyConfig [28468 WebuiStartup] - Failed to initialize the no_proxy setting from server.conf for splunkd. Please provide a valid set of no_proxy rules in case HTTP proxying needs to be enabled.
06-07-2023 18:37:32.314 +0000 WARN SSLOptions [28468 WebuiStartup] - <internal>.conf/[<internal>]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
06-07-2023 18:37:32.414 +0000 WARN SSLOptions [28468 WebuiStartup] - <internal>.conf/[<internal>]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
06-07-2023 18:37:32.837 +0000 WARN SSLOptions [28394 SchedulerThread] - server.conf/[search_state]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
06-07-2023 18:37:32.999 +0000 WARN ProcessTracker [27894 MainThread] - (child_1__Fsck) SSLOptions - server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
06-07-2023 18:37:34.574 +0000 INFO ExecProcessor [28362 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" splunk-dashboard-studio version is 1.7.3
06-07-2023 18:37:34.575 +0000 INFO ExecProcessor [28362 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" Content of /opt/splunk/etc/apps/splunk-dashboard-studio/kvstore_icon_status.conf is {'default': {'uploadedVersion': '1.7.3'}}
06-07-2023 18:37:34.575 +0000 INFO ExecProcessor [28362 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" Icons of splunk-dashboard-studio version 1.7.3 are already stored in kvstore collection. Skipping now and exiting.
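If the AMI was taken from a running instance, the clone can carry over instance-specific state (server name, GUID, stale pid files) that confuses splunkd on the new host. A hedged sketch of the usual clone workflow, plus where to look next; paths assume a default /opt/splunk install:

# On the source instance, before creating the image:
sudo /opt/splunk/bin/splunk stop
sudo /opt/splunk/bin/splunk clone-prep-clear-config   # clears instance-specific config such as the server name and GUID

# On the new instance, if the UI still fails after a start:
sudo tail -100 /opt/splunk/var/log/splunk/web_service.log

The splunkd.log snippet above shows splunkd itself coming up, so web_service.log is the more likely place to see why the web server on 8443 never answers.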
Hello, I am struggling a bit with regex and field extractions. I need to write my own sourcetype because I haven't found anything pre-made for dnstap. Maybe I was blind and you have something ready to hand. I have the following raw event text:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24094
;; flags: qr aa rd ra ; QUESTION: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
;; QUESTION SECTION:
;www.test.aa. IN A
;; ANSWER SECTION:
www.test.aa. 60 IN CNAME testserver.domain
www.test.aa. 60 IN A 192.168.1.20
;; AUTHORITY SECTION:
test.aa. 60 IN NS localhost.

I want to extract the "ANSWER SECTION", but my regex fails:

;;\sANSWER\sSECTION:\v(?<response_query>\S+)\s+(?<response_ttl>\S+)\s+(?<response_class>\S+)\s+(?<reponse_type>\S+)\s+(?<response>\S+)

The problem is that only the first line of the section is captured, but I need to capture every line, because I need all the values. The "ANSWER SECTION" can consist of one line or several. I'm using regex101.com with the regex flags "multi line" and "single line", as described in props.conf -> EXTRACT-<class>.
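For what it's worth, a common way to get every answer line in Splunk itself (a sketch, not tested against this data): first capture the whole ANSWER SECTION block, then explode it with rex max_match=0, which repeats the match instead of stopping at the first line.

... | rex "(?s);;\sANSWER\sSECTION:\n(?<answer_section>.*?)(?:\n;;|$)"
    | rex field=answer_section max_match=0 "(?m)^(?<response_query>\S+)\s+(?<response_ttl>\S+)\s+(?<response_class>\S+)\s+(?<response_type>\S+)\s+(?<response>\S+)"

A plain EXTRACT-<class> in props.conf only keeps the first match, so multi-line sections generally need either this two-step rex at search time or a transforms.conf REPORT with MV_ADD = true.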
Hello, can we use AppDynamics to monitor SCOM, SPECTRUM, Azure, etc.? We require it for some scenarios. Please advise.
I am currently trying to join two search queries together with the appendcols command, in order to display two lines of data in a line graph. I am running across an error with the search being appended: it displays the wrong data. This is the current search with appendcols added:

index=main host=* sourcetype=syslog process=elcsend "\"config " CentOS
| rex "([^!]*!){2}(?P<type>[^!]*)!([^!]*!){4}(?P<role>[^!]*)!([^!]*!){23}(?P<vers>[^!]*)"
| search role=std-dhcp
| eval Total=sum(host)
| timechart span=1d count by Total
| rename NULL as CentOS
| appendcols override=true
    [search index=os source=ps host=deml* OR host=sefs* OR host=ingg* OR host=us* OR host=gblc* NOT user=dcv NOT user=root NOT user=chrony NOT user=dbus NOT user=gdm NOT user=libstor+ NOT user=nslcd NOT user=polkitd NOT user=postfix NOT user=rpc NOT user=rpcuser NOT user=rtkit NOT user=colord NOT user=nobody NOT user=sgeadmin NOT user=splunk NOT user=setroub+ NOT user=lp NOT user=68 NOT user=ntp NOT user=smmsp NOT user=dcvsmagent NOT user=libstoragemgmt
    | timechart span=1d dc(user) as DCV]

This is the current result. DCV (furthest right) is displaying the wrong count: when the search is run by itself it shows around 300, but once appended it only displays 112, give or take. It is also not displaying over the period of days, only on the most recent day. I am not sure whether something is wrong in the search itself or the fields are overlapping.
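Two things may be at play here (hedged, since only the search is visible, not the data): appendcols pastes rows together strictly by row position, so both timecharts must produce identical time buckets over the identical time range, and the subsearch runs under subsearch limits (result caps and a runtime ceiling) that can silently truncate the DCV counts. A sketch of a shape that avoids both by using one outer search with per-series evals; the filters are copied from the post and simplified (the rex/role filter from the first search would still need folding in):

(index=main host=* sourcetype=syslog process=elcsend "\"config " CentOS)
OR (index=os source=ps (host=deml* OR host=sefs* OR host=ingg* OR host=us* OR host=gblc*))
| eval series=if(index="main", "CentOS", "DCV")
| timechart span=1d count(eval(series="CentOS")) as CentOS, dc(eval(if(series="DCV", user, null()))) as DCV

dc(eval(...)) counts distinct users only on the DCV branch, and because there is no subsearch, no subsearch limit applies.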
Hello, curious whether anyone has tried using a Data Variable within a particular metric. We intend to populate the metric based on the value of a specific tier, and this tier will be captured from a Data Variable. The metric we intend to use:

Application Infrastructure Performance|cpak-urx-service - sit|JVM|Process CPU Burnt (ms/min)

We tried the approach below, replacing the tier name with the Data Variable, but got an "Invalid Metric" error:

Application Infrastructure Performance|${Service_Name}|JVM|Process CPU Burnt (ms/min)
I am trying to add, to my notable event in a correlation search, the next steps the analyst needs to take, and I am seeing an issue: when I list the next-step actions, they get truncated in the notable event on the Incident Review page. Step 1 and step 2 appear on the same line, even after I separate them with a line break.
I need help sending logs to Splunk from GitLab. Could someone help me get started?
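If you have no pipeline yet, the lowest-friction path is usually Splunk's HTTP Event Collector (HEC): enable HEC on the Splunk side, create a token, then have a GitLab CI job (or a small webhook receiver) POST events to it. A minimal sketch; the host, port, token, and sourcetype are all placeholders:

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"sourcetype": "gitlab:ci", "event": {"project": "my-project", "message": "pipeline finished"}}'

For GitLab's own server logs (as opposed to CI events), a Universal Forwarder monitoring /var/log/gitlab/ is usually the better fit.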
Hi, I am trying to create tags based on index and field name. Log:

1, User.field1, User.field2, User.field3
2, Admin.field1, Admin.field2, Admin.field3
3, Admin.field1, Admin.field2, Admin.field3

I want to tag User.* fields with the tag User, and Admin.* with Admin, so that when we search with tag User, only User events are listed. Thanks
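Tags in Splunk attach to field=value pairs (or to an eventtype), not to field-name wildcards, so one workable pattern is to define an eventtype per class of event and tag that. A sketch, with the index name and match expressions as assumptions about your data:

# eventtypes.conf
[user_events]
search = index=my_index User.field1="*"

[admin_events]
search = index=my_index Admin.field1="*"

# tags.conf
[eventtype=user_events]
User = enabled

[eventtype=admin_events]
Admin = enabled

After a restart or reload, searching tag=User should return only the events matched by the user_events eventtype.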
Hi, how can I achieve an unlimited web session timeout for a specific user? Best regards,
Is there a limit on the number of forwarders that a single dedicated deployment server can connect with? We're trying to determine the VM specification (CPU/memory/disk/etc.) a dedicated deployment server needs in order to manage, say, 5k, 10k, or 30k forwarders, and whether that is even possible on one server. I have only found documentation about estimated time-to-deploy calculations, nothing about VM spec versus number of forwarders.
Hi everyone, I need to filter these events, removing the events related to RdrCEF.exe. How do I create an exception in inputs.conf for this full file path: C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\acrocef_1\RdrCEF.exe?

Today my inputs.conf is:

[WinEventLog://Microsoft-Windows-AppLocker/EXE and DLL]
checkpointInterval = 5
current_only = 0
disabled = 0
index = test1
start_from = oldest
renderXml = 1
whitelist = 8000, 8004, 8007, 8008, 8029, 8032, 8035, 8036, 8040
blacklist1 = EventCode = "^8004$" Message = "\%PROGRAMFILES\%\\ADOBE\\ACROBAT\sREADER\sDC\\READER\\ACROCEF_1\\RDRCEF\.EXE"
blacklist2 = EventCode = "^8004$" Message = "\%PROGRAMFILES\%\\ADOBE\\ACROBAT\sREADER\sDC\\READER\\ACROCEF_1\\RDRCEF\.EXE"
blacklist3 = EventCode = "^8004$" Message = "\\RDRCEF\.EXE"
blacklist4 = EventCode = "^8004$" Message = "*\\RDRCEF\.EXE"
_TCP_ROUTING = test
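For reference, a hedged sketch of a single blacklist line of the shape that tends to work for AppLocker inputs; the regex is an assumption, and matching is case-sensitive unless you add (?i):

blacklist1 = EventCode="8004" Message="(?i).*\\RdrCEF\.exe"

Two caveats worth checking: only one entry per distinct pattern is needed (blacklist1 and blacklist2 above are identical, so one adds nothing, and the bare * in blacklist4 is not valid regex), and with renderXml = 1 the event is forwarded as XML, so the Message key may not be populated the way classic rendering populates it; if the filter never fires, try renderXml = 0 or match against the XML payload instead.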
Hello Splunkers, I recently deployed ES and went through a "proper" installation. I'm running into an issue with most dashboards; looking at the logs, this is what I see:

"Error in 'SearchParser': The search specifies a macro 'cim_Network_Traffic_indexes' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information."

When I go into the macro settings and click on the macro, it takes me to the CIM Setup page, AND the macro is enabled, spelled correctly, and exists! I've tried creating my own macro with the same name and global settings (and sometimes this works), but it does not work 100% of the time. Has anyone run into this? Thanks! Best, A
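When this has come up before, the root cause is often that the macro exists in Splunk_SA_CIM but isn't exported globally, so other apps can't read it. Two hedged things to check (paths assume a default install):

/opt/splunk/bin/splunk btool macros list cim_Network_Traffic_indexes --debug

# If the macro only resolves inside Splunk_SA_CIM, share it globally, e.g. in
# $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/metadata/local.meta (verify any existing stanza first):
[macros/cim_Network_Traffic_indexes]
export = system

Opening the CIM Setup page and saving once, even without changes, has also been known to regenerate these macros.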
Hello, I have 2 indexes: one receives about 40 million records per day and the other about 80% of that. I have 2 unique fields in each index that allow me to merge them. Is it possible to merge the 2 indexes before ingestion? If I search index A OR index B with an eval after it, it works, but I have to limit the period to no more than a couple of hours or it takes a very long time to get the result. Plus, if I want to limit the result by a selection on other criteria, I need to merge them before applying those criteria, because the information is divided between the 2 indexes. I have been trying to figure this one out for months with a lot of trial and error, which is why I'm giving it a shot here. Thanks!
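You generally can't merge two indexes at ingestion time, but you can avoid a slow join by letting stats do the merge across both indexes in one pass, filtering each side as early as possible. A sketch, with the index names and the two shared key fields as placeholders:

(index=index_a OR index=index_b) <any filters that apply to both sides>
| stats values(*) as * by key1 key2
| where <criteria that need fields from both sides>

If even that is too slow at 40M+ events/day, the longer-term options are a summary index populated by a scheduled search that performs this stats merge, or an accelerated data model over the two indexes.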
Hello team, we have some special symbols in our application, like Ω, µA, etc. We are able to export data from the application successfully, push it to Splunk, and create a report. In the Splunk report we can see the symbols properly without any issue, but when we download the data into a CSV, the symbols are converted into different symbols. Can somebody help us solve this? Thanks, Reddy
Hi, we're already using LDAP to access Splunk, but now we need to switch to LDAPS. I've read that the port needs to be 636 and the "SSL enabled" flag set, but the message "You must also have SSL enabled on your LDAP server" confuses me. Should I do something else? Do I need to change the .pem certificate, and if so, on the cluster master? I've also found this conf file: $SPLUNK_HOME/etc/openldap/certs/ldap.conf; I guess I need to modify it as well. Thank you in advance for any help you can offer.
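On the Splunk side the switch is usually just the port and SSL flag on the existing LDAP strategy; the confusing message only means the directory server itself must be listening for LDAPS on 636. A sketch of the relevant authentication.conf settings (the stanza name matches whatever your current strategy is called):

[<your_ldap_strategy>]
host = ldap.example.com
port = 636
SSLEnabled = 1

If the LDAP server's certificate is signed by an internal CA, point $SPLUNK_HOME/etc/openldap/ldap.conf at the CA bundle (e.g. a TLS_CACERT line); the Splunk Web .pem certificate is unrelated to LDAP and shouldn't need changing. Make the change wherever LDAP authentication is configured, which on a clustered deployment means each search head (and the cluster master if it also uses LDAP).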
Hi, I am using Dashboard Studio to build my dashboard. When loading a timechart containing a small set of events (~200), I frequently get a "Search <id> not found. The search may have been cancelled while there are still subscribers" error, and the chart is not rendered. It does work if I click the chart's refresh button, but that is a very bad user experience. The timechart's data source follows the pattern below; my dashboard has two of them.

index=<index> sourcetype=<sourcetype> source=<source> host=<host>
| multikv fields <fieldA> <fieldB> filter <condition>
| timechart avg(fieldA) avg(fieldB)

Best regards, Gustavo
Hello everyone, I have the syslog files from my Cisco CallManager stored on my Ubuntu 22.04 host via rsyslog, in the path /home/splunk/syslog/. I have set up a corresponding input in my Splunk Enterprise:

[batch:///home/splunk/cdr/cdr_*]
index = cisco_cdr
move_policy = sinkhole
sourcetype = cucm_cdr
disabled = false

[batch:///home/splunk/cdr/cmr_*]
crcSalt = <SOURCE>
index = cisco_cdr
move_policy = sinkhole
sourcetype = cucm_cmr
disabled = false

[monitor:///home/splunk/syslog/*]
disabled = false
index = cisco_syslog
sourcetype = cucm_syslog
move_policy = sinkhole

The syslogs are being ingested, but the files are not being deleted. Does anyone have an idea of what I might have done wrong?
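The likely culprit is the last stanza: move_policy = sinkhole is only honored by batch inputs, while monitor just tails files and never deletes them. If you really want the syslog files consumed and removed, a sketch of the corrected stanza (note that batch deletes files rsyslog may still be writing to, so rotating files out of the live path first is safer):

[batch:///home/splunk/syslog/*]
move_policy = sinkhole
index = cisco_syslog
sourcetype = cucm_syslog
disabled = false

Otherwise, keep the monitor stanza, drop the move_policy line from it, and let logrotate handle cleanup.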
Hi, I have a set of sourcetypes in a lookup. My SPL has to look at the latest timestamp for each of those sourcetypes over the last hour and report the sourcetypes for which no events are available in that hour.
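A common shape for this (a sketch; the lookup name and its sourcetype column are placeholders): get the latest event time per sourcetype with tstats, append the expected list from the lookup, and keep the sourcetypes whose only row came from the lookup.

| tstats latest(_time) as latest_time where index=* earliest=-1h by sourcetype
| inputlookup append=true expected_sourcetypes.csv
| stats max(latest_time) as latest_time by sourcetype
| where isnull(latest_time)

Because the tstats leg is restricted to earliest=-1h, any sourcetype from the lookup with no events in the last hour ends up with a null latest_time and survives the final where.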