All Posts

I am fairly new to Splunk. I am testing out different search queries and getting inconsistent results. In this example I have some pretty simple JSON logs with the following format:

{ "data": { "tree": { "fruit": { "type": "Pear" } } } }

I'm trying several different searches and seeing some unexpected results:

"data.tree.fruit.type"="Apple" - Returns Apple-only results (as expected)
* | spath "data.tree.fruit.type" | search "data.tree.fruit.type"=Apple - Returns Apple-only results (as expected)
"data.tree.fruit.type"="Pear" - Returns NO results (unexpected?)
* | spath "data.tree.fruit.type" | search "data.tree.fruit.type"=Pear - Returns Pear-only results (as expected)
"data.tree.fruit.type"="*" - Returns Apple-only results (unexpected)

Can anyone shed some light on why I'm seeing the varying results?
Regarding the second alert - I think that looks like it should fire, although there is no suppression enabled and the crontab is set to run every minute - so you might find you get a lot of alerts! It's worth checking in the _audit index to see if the search is executing successfully and if it alerts. Also, have you checked your Spam folder in case the emails have ended up there? Have you previously been able to send an email from Splunk and received it in your inbox? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
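For example, something like the following should show whether the search ran and whether the alert fired - a rough sketch, reusing the savedsearch_name from your scheduler search (adjust to your actual alert name):

index=_audit action=search savedsearch_name="PRUEBA Scheduled"
| table _time user info savedsearch_name

The info field should show values such as granted and completed for each run.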
If I have a transforms.conf like the below:

[ORIGIN2]
REGEX = (?:"id":"32605")
FORMAT = sourcetype::test-2
DEST_KEY = MetaData:Sourcetype

[aa]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[bb]
REGEX = (?=.*successfully)
DEST_KEY = queue
FORMAT = indexQueue

and I call the props like the following:

[test]
TRANSFORMS-rename_sourcetype = ORIGIN2
SHOULD_LINEMERGE = false
EVAL-ok = "ok"

[aslaof:test-2]
EVAL-action2 = "whatt"
TRANSFORMS-eliminate_unwanted_data = aa,bb
EVAL-action = "nooooo"

I can't seem to figure out why I'm not allowed to perform a transform on my newly created sourcetype. Oddly, Splunk registers my two EVAL commands, but my transforms are not performed. Am I not allowed to perform transforms on a sourcetype I just created? I also tried combining the initial transform that creates the sourcetype into one piece: REGEX = (?=.*"id":"32605")(?=.*successfully), but this does not seem to work either.
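For reference, the combined attempt looked roughly like this (keeping the same stanza name and FORMAT as the ORIGIN2 transform above):

[ORIGIN2]
REGEX = (?=.*"id":"32605")(?=.*successfully)
FORMAT = sourcetype::test-2
DEST_KEY = MetaData:Sourcetype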
I deployed a Splunk AppDynamics on-premises and a Splunk Enterprise on-premises for a demo, and I have followed along with this tutorial to create the integration user: How To - Enable Log Observer Connect For Splunk AppDynamics and Splunk Core Integration - YouTube. I also tried following along with this one: How to Deploy Log Observer Connect for AppDynamics - Splunk Community. But I still have no clue how to achieve the integration. After adding the username, password, and required parameters in Splunk AppDynamics, I get this: "Error: The Splunk integration service is not available." Any ideas how to solve this issue?
Hi @SPLAUR, it's generally not advised to use real-time searching; scheduled is much better on your system! Also, you have the value of "1" in the suppress fields box on the first alert, but this should probably be "src".
I am deploying an On-Premises AppDynamics demo for a customer, version 25.1.1.10031, and it is running on HTTP (8090). However, when I try to open https://<ip_addr>:8181, I get the attached error message (SNI). The screenshot shows "appd" rather than the IP address, just to hide it. How do I bypass this error?
Dear Splunk community, I have a search in Splunk that generates results:

index="myindex" message_id="AU2" | stats count by src | search count > 2

It basically searches the index for events of type "AU2" and shows an alert when there are more than 2. I have created several alerts with different modes:

Real-time mode
Scheduled mode

When I run:

index=_internal sourcetype=scheduler savedsearch_name="PRUEBA Scheduled"

it shows the following:

Could you tell me what I might be doing wrong or what I might be missing? Regards.
Hello! I am using Dashboard Studio. I created an Events visualization that is currently in the List view. I want to make it into the Table view. This is the source code for the Dashboard Studio dashboard:

{
  "containerOptions": {},
  "context": {},
  "dataSources": {
    "primary": ds # the query is index=* | table field1, field2, field3
  },
  "options": {},
  "showLastUpdated": false,
  "showProgressBar": true,
  "type": "splunk.events"
}

I created something similar to what I'm looking for in Dashboard Classic, and this is the code:

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <fields>field, field2, field3</fields>
    <option name="type">table</option>
  </event>
</panel>
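I was guessing the Studio version needs something like the following - assuming the splunk.events visualization accepts the same type option ("list", "raw", "table") as the classic event viewer, which I have not been able to confirm:

{
  "type": "splunk.events",
  "dataSources": {
    "primary": "ds"
  },
  "options": {
    "type": "table"
  }
}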
I figured it out! I added the fields:

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <fields>field, field2, field3</fields>
    <option name="type">table</option>
  </event>
</panel>
I found a right way, but I don't know how to reset the search for another try.

index=sysmon_wec AND (EventCode=22 OR event_id=22)
| makemv tokenizer="([^\r\n]+)(\r\n)?" User
| mvexpand User
| where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE")
| eval proc_filter=if(len("$procname$") > 0, 1, 0)
| eval user_filter=if(len("$user$") > 5, 1, 0)
| where (proc_filter=1 AND process_name="$procname$" AND user_filter=0) OR (proc_filter=1 AND process_name="$procname$" AND User="$user$")
| head 100
| table process_name, User, ComputerName, QueryName, QueryResults
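If it helps anyone, I suspect the last where clause can be collapsed so that an empty token simply disables that condition instead of filtering everything out - an untested sketch:

| where (proc_filter=0 OR process_name="$procname$") AND (user_filter=0 OR User="$user$")

That way, clearing either input should effectively reset that part of the filter.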
Hello! Thank you for your response. I tried that. However, the Selected Field headers are still just _time, host, source, and sourcetype. The Event Fields that appear when I expand the event do include the fields that I put in the table command (e.g. Level, Details, etc.), but they don't appear as the headers or the Selected Fields.
Try using the table command in your query:

index=* | table _time, host, source, sourcetype, otherfield1, otherfield2
I am creating a Classic Dashboard. I have an Events panel that is in the Table format. The headers for the table are the following fields: _time, host, source, and sourcetype. These are the "Selected Fields". However, there are other fields that I would like to include as "Selected" so that they show up in the header. Is there any way to do that?

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <option name="type">table</option>
  </event>
</panel>
I have migrated to 9.4.1. I initially had certificate issues, which have been resolved; the KV store still fails to start, however. Besides the error below (Failed to connect to target host: ip-10-34-2-203:8191), here is what I am seeing.

/opt/splunk/bin/splunk show kvstore-status --verbose

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
This member:
  backupRestoreStatus : Ready
  disabled : 0
  featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191]
  guid : 4059932D-D941-4186-BE08-6B6426B618CB
  port : 8191
  standalone : 1
  status : failed
  storageEngine : wiredTiger

mongodb.log

2025-03-11T15:46:17.377Z I CONTROL [initandlisten] MongoDB starting : pid=2570573 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=ip-10-34-2-203
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] db version v4.2.25
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] git version: 41b59c2bfb5121e66f18cc3ef40055a1b5fb6c2e
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2zk-fips 3 Sep 2024
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] allocator: tcmalloc
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] modules: enterprise
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] build environment:
2025-03-11T15:46:17.377Z I CONTROL [initandlisten]     distmod: rhel70
2025-03-11T15:46:17.377Z I CONTROL [initandlisten]     distarch: x86_64
2025-03-11T15:46:17.377Z I CONTROL [initandlisten]     target_arch: x86_64
2025-03-11T15:46:17.377Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "opt/splunk/etc/auth/cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidHostnames: true, certificateKeyFile: "/opt/splunk/etc/auth/server.pem", certificateKeyFilePassword: "<password>", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "4059932D-D941-4186-BE08-6B6426B618CB" }, security: { clusterAuthMode: "sendX509", javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 2.25 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2025-03-11T15:46:19.083Z I CONTROL [initandlisten] ** WARNING: This server will not perform X.509 hostname validation
2025-03-11T15:46:19.083Z I CONTROL [initandlisten] ** This may allow your server to make or accept connections to
2025-03-11T15:46:19.083Z I CONTROL [initandlisten] ** untrusted parties
2025-03-11T15:46:19.102Z I REPL [initandlisten] Rollback ID is 1
2025-03-11T15:46:19.103Z I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2025-03-11T15:46:19.122Z I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2025-03-11T15:46:19.129Z I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2025-03-11T15:46:19.135Z I NETWORK [listener] Listening on 0.0.0.0
2025-03-11T15:46:19.135Z I NETWORK [listener] waiting for connections on port 8191 ssl
2025-03-11T15:46:19.298Z I NETWORK [listener] connection accepted from 10.34.2.203:56880 #1 (1 connection now open)
2025-03-11T15:46:19.300Z I NETWORK [conn1] end connection 10.34.2.203:56880 (0 connections now open)

server.conf

[general]
pass4SymmKey =
serverName = splunk1

[sslConfig]
serverCert = /opt/splunk/etc/auth/server.pem
sslRootCAPath = opt/splunk/etc/auth/cacert.pem
enableSplunkdSSL = true
sslVersions = tls1.2
sslPassword = <yada yada yada>

[kvstore]
storageEngine = wiredTiger
serverCert = /opt/splunk/etc/auth/server.pem
sslRootCAPath = opt/splunk/etc/auth/cacert.pem
sslVerifyServerCert = true
sslVerifyServerName = true
sslPassword = <yada yada yada>

splunkd.log, when I grep for the hostname:

root@ip-10-34-2-203:~# grep ip-10-34-2-203 /opt/splunk/var/log/splunk/splunkd.log
03-11-2025 12:48:48.418 +0000 INFO ServerConfig [0 MainThread] - My hostname is "ip-10-34-2-203".
03-11-2025 12:48:48.466 +0000 INFO loader [2492128 MainThread] - System info: Linux, ip-10-34-2-203, 5.15.0-1077-aws, #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:54 UTC 2025, x86_64.
03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=deploymentServer/phoneHome/default connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_ds_default listener=0x7f2a306bfa00
03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=deploymentServer/phoneHome/default connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_ds_default listener=0x7f2a306bfa00
03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=deploymentServer/phoneHome/default/metrics connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_ds_default listener=0x7f2a306bfa00
03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=tenantService/handshake connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_tenantService listener=0x7f2a306bfc00
03-11-2025 13:44:36.801 +0000 ERROR KVStorageProvider [2493368 TcpChannelThread] - An error occurred during the last operation ('collectionStats', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191]
03-11-2025 13:44:36.801 +0000 ERROR CollectionConfigurationProvider [2493368 TcpChannelThread] - Failed to get collection stats for collection="era_email_notification_switch" with error: No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191]
03-11-2025 14:03:17.838 +0000 ERROR KVStorageProvider [2493425 TcpChannelThread] - An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191]
03-11-2025 14:03:17.842 +0000 ERROR KVStorageProvider [2493425 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'ip-10-34-2-203:8191']

These are other errors I noticed that might be related:

03-11-2025 14:37:31.298 +0000 ERROR X509Verify [2538813 ApplicationUpdateThread] - Server X509 certificate (CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1,O=DigiCert Inc,C=US) failed validation; error=20, reason="unable to get local issuer certificate"
03-11-2025 14:37:31.298 +0000 WARN SSLCommon [2538813 ApplicationUpdateThread] - Received fatal SSL3 alert. ssl_state='error', alert_description='unknown CA'.
03-11-2025 14:37:31.298 +0000 WARN HttpClientRequest [2538813 ApplicationUpdateThread] - Returning error HTTP/1.1 502 Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
03-11-2025 14:37:31.298 +0000 ERROR ApplicationUpdater [2538813 ApplicationUpdateThread] - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
03-11-2025 14:38:57.211 +0000 ERROR KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=Failed to receive response from kvstore error=, service not ready after waiting for timeout=301389ms
03-11-2025 14:38:57.211 +0000 ERROR KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.
03-11-2025 14:38:57.211 +0000 WARN KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Action scheduled, but event loop is not ready yet
03-11-2025 14:38:57.211 +0000 ERROR KVStoreBulletinBoardManager [2536490 KVStoreConfigurationThread] - KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.
03-11-2025 14:38:57.211 +0000 ERROR KVStoreBulletinBoardManager [2536490 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details.
03-11-2025 14:38:57.211 +0000 INFO KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Mongod service shutting down
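Following the hint in the HttpClientRequest error above, a quick way to sanity-check the certificate chain would be something like this (a sketch using the certificate paths from my server.conf; I am assuming absolute paths here, since sslRootCAPath above is written without a leading slash):

openssl verify -CAfile /opt/splunk/etc/auth/cacert.pem /opt/splunk/etc/auth/server.pem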
One of our timecharts showed "future" time (by one hour) on the x-axis.  Turns out the server time was off by one hour.
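If anyone wants to check for the same issue, comparing event time against index time should expose the skew - a rough sketch, with the index name as a placeholder:

index=your_index earliest=-1h
| eval skew_seconds = _indextime - _time
| stats avg(skew_seconds) as avg_skew max(skew_seconds) as max_skew

A consistent skew of around 3600 seconds would point to a one-hour clock or timezone offset.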
Well... then it should work. One thing you could change in your spec is dropping the conditionality at the end (you should never have the directory specified as the source, just files from below this directory), but that's not the issue here. I noticed one thing though - a similar case to one we had not long ago in another thread - your transform class has the name "null". That is a fairly common name, so it might be getting overridden somewhere else in your configs. Check the btool output to see whether it is.
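For reference, something like this should show every definition of that stanza and which file wins (assuming the transform really is named "null"):

$SPLUNK_HOME/bin/splunk btool transforms list null --debug

The --debug flag prefixes each line with the file it came from, so an override should be easy to spot.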
Hi @dolj  You should be able to use the streamstats command, which allows you to perform operations on a stream of events and group them by a specific field. In your case, you want to calculate the difference in scores for tests with the same test_name but different test_id. Here's how you can do it:

1. Use the streamstats command to calculate the difference in scores for each test_name.
2. Use the by clause to group the calculations by test_name.

Here's a Splunk search query that should accomplish this:

| your_search_here
| sort test_name, test_id
| streamstats current=f last(Score) as previous_score by test_name
| eval Drift = if(isnull(previous_score), null(), Score - previous_score)
| table test_name, test_id, Score, Drift

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
I am trying to find a way to compare the results listed in a table to each other. Basically, the table lists the results of many different tests, where some tests have the same names but have been run and rerun, so they have the same test_names but different test_IDs. Something like this:

test_name   test_id   Score   Drift
test 1      .98       100
test 1      .99       98      -2
test 1      1.00      100     2
test 2      .01       30
test 3      0.54      34
test 3      0.55      76      42

I am looking for a way to take the score from line one and have some logic that looks at the result of the next line: if that line has the same test_name BUT a different test_ID, it subtracts the first line's score from the second line's score, and continues along; when (for example) the next line has a different test_name, it skips that line until it finds another run of lines with the same test_name, and so on until all scores are compared. The delta command almost works, but I need the ability to say BY test_name, something like:

| delta Score as Drift by test_name

Unfortunately, delta doesn't accept by clauses. I am trying to find a way to calculate the Drift column using Splunk so I can create a detection for when the drift exceeds a specific threshold.
Yes, it's working correctly. For example, I am reindexing /var/log/syslog to index=os_logs, and it applies as expected.
I am able to see the KPI logging the alert value accurately for this service. I just don't see the alert value being reflected in the graph for thresholding.