All Posts

And can I try giving KV_MODE = json just to check my luck? What will be the consequences if it doesn't work? Please guide me through the steps.
Hi everyone, I've recently tested the new Splunk AI feature within Splunk ITSI to define thresholds based on historic data/KPI points. ("Test" as in I literally created very obvious dummy data for the AI to process and find thresholds for: a sort of trust test of whether the AI really does find usable thresholds.)

Example: every 5 minutes the KPI takes the latest value, which I've set to correspond with the current weekday (plus minimal variance). For example, all KPI values on Mondays are within the range 100-110, Tuesdays 200-210, Wednesdays 300-310, and so forth. This is a preview of the data:

Now, after a successful backfill of 30 days, I would have expected the AI to see that each weekday needs its own time policy and thresholds. However, the result was this: no weekdays detected, and instead it finds time policies for every 4 hours regardless of day.

By now I've tried all possible adjustments I could think of (increasing the number of data points, greater differences between data points, other algorithms, waiting for the next day in hopes it would recalibrate itself over midnight, etc.). Hardly any improvements at all, and the thresholds are not usable like this, as they would not catch outliers on Mondays (expected values 100-110; an outlier of 400 would not be detected because it is still within the thresholds).

Thus my questions to the community: Does anyone have ideas/suggestions on how I could make the AI understand the simple idea of "weekly time policies", and how I could tweak it (aside from doing everything manually and ditching the AI idea as a whole)? Does anyone have good experience with Splunk AI defining thresholds, and if so, what were the use cases?
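For anyone who wants to reproduce this, a rough sketch of how such weekday-dependent dummy KPI data can be generated in SPL (the field names and the 30-day/5-minute spacing follow the description above; everything else is illustrative only):

```
| makeresults count=8640
| streamstats count AS n
| eval _time = now() - n*300
| eval weekday = tonumber(strftime(_time, "%u"))
| eval kpi_value = weekday*100 + (random() % 11)
| table _time kpi_value
```

Here `%u` maps Monday to 1 through Sunday to 7, so Mondays land in 100-110, Tuesdays in 200-210, and so on, matching the pattern described.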
Yes, it's the latter case. But the search query I mentioned above (spath) is working perfectly. Is there any way I can achieve this? If it is not possible, can I make a macro of that query and use it in the search query? I don't know how the customer would feel about that.
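For reference, a minimal sketch of what such a macro could look like (the macro name `extract_json` and the app it lives in are hypothetical; macros are defined in macros.conf on the search heads):

```
# macros.conf
[extract_json]
definition = rex "(?<json>\{.*\})" | spath input=json
iseval = 0
```

It would then be used in a search as, e.g., `index=myindex sourcetype=my_json | `extract_json``. Macro expansion is textual, so the pipeline in the definition is spliced into the search as-is.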
I know it's a json. But is it the whole event? Or does the event have additional pieces? So does the event look like this: { "a":"b", "c":"d" } or more like this <12>Nov 12 20:15:12 localhost whatever: data={"a":"b","c":"d"} and you only want the json part parsed? In the former case, it's enough to set KV_MODE to json (but KV_MODE=json doesn't handle multilevel field names). If it's the latter - that's the situation I described - Splunk cannot handle the structured _part_ automatically.
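For the former case (the whole event is the JSON object), a minimal search-time sketch is enough; the sourcetype name `my_json` here is hypothetical:

```
# props.conf, deployed to the search head(s)
[my_json]
KV_MODE = json
```

This makes Splunk auto-extract the JSON fields at search time with no `spath` needed, subject to the multilevel-field-name limitation mentioned above.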
Hi @PickleRick , it's structured JSON data we have, and it is not extracting field values automatically. Every time we need to give the command in the search, which is not what the customer wants. They want this extraction to be the default.
Yes, and the issue follows over to the cloned entry.
Unfortunately, at this moment Splunk can only do automatic structured data extraction if the whole event is well-formed structured data. So if your whole event is a json blob, Splunk can interpret it automatically. If it isn't, because it contains some header or footer, it's a no-go. There is an open idea about this on ideas.splunk.com - https://ideas.splunk.com/ideas/EID-I-208 - feel free to upvote it. For now, all you can do is trim your original event so that it contains only the json part (but then you might lose some data, I know).
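One way to do that trimming is an index-time SEDCMD, assuming the header never contains a `{` before the JSON starts (the sourcetype name is hypothetical, and this must go on the first full Splunk instance that parses the data):

```
# props.conf
[my_syslog_json]
# Strip everything up to the first "{" so the remaining event is pure JSON
SEDCMD-trim_header = s/^[^{]*//
# Then search-time auto-extraction of the (now whole-event) JSON works
KV_MODE = json
```

Note the caveat above: whatever preceded the JSON (timestamp header, host, etc.) is discarded from _raw, so make sure timestamp extraction is configured before the trim takes effect.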
Description: Hello, I am experiencing an issue with the "event_id" field when transferring notable events from Splunk Enterprise Security (ES) to Splunk SOAR. Details: When sending the event to ... See more...
Description: Hello, I am experiencing an issue with the "event_id" field when transferring notable events from Splunk Enterprise Security (ES) to Splunk SOAR. Details: When sending the event to SOAR using an Adaptive Response Action (Send to SOAR), the event is sent successfully, but the "event_id" field does not appear in the data received in SOAR. Any assistance or guidance to resolve this issue would be greatly appreciated. Thank you
Yes, this is the way. Thanks @ITWhisperer  this is exactly what I was looking for.
Hi @pumphreyaw , @mattymo . Now I am stuck on the same problem. We don't actually have a HF. We have a deployment server which pushes apps to our manager and deployer. From there, the manager will push apps to the peer nodes. We have 3 search heads and a deployer. Where do I need to put these configurations to extract the JSON data? Can you please help me step by step?
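As a rough sketch of the placement (app and sourcetype names are hypothetical): search-time settings such as KV_MODE belong on the search heads, distributed via the deployer; index-time settings such as SEDCMD belong on the indexer peers, distributed via the manager node:

```
# On the deployer, pushed to the search head cluster:
#   $SPLUNK_HOME/etc/shcluster/apps/my_json_app/default/props.conf
[my_json]
KV_MODE = json

# On the manager node, pushed to the indexer peers:
#   $SPLUNK_HOME/etc/master-apps/my_json_app/default/props.conf
#   (manager-apps on newer Splunk versions)
[my_json]
SEDCMD-trim_header = s/^[^{]*//
```

The deployer pushes with `splunk apply shcluster-bundle`, the manager with `splunk apply cluster-bundle`; check the exact directory names against your Splunk version's docs.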
It shouldn't hurt. If you escape something that doesn't need escaping, nothing bad should happen. It's just ugly.
Hello @dbray_sd . Have you tried cloning the older input and creating a new one? Sometimes the checkpoint fails during an upgrade, but cloning the input will create a new checkpoint and possibly resolve your issue.
There can be multiple reasons behind streamfwd.exe not running; you should file a support case to get this fixed.
Hi, Not sure if you've tried this, but it looks like a similar issue in a lower-version upgrade: https://community.splunk.com/t5/Installation/After-upgrading-from-Splunk-6-2-3-to-6-3-0-why-am-I-getting/m-p/252282 Cheers Meaf
No. Where do I need to specify this? What will this query do? Please explain.
Hi, We are running a Splunk Enterprise HWF with a generic S3 input to fetch objects from an S3 bucket; however, each time we try to move this input onto a new, identical HWF, we have issues getting the same data from the same bucket. Both instances are on Splunk 9.2, but the Splunk AWS TA versions are different. Both are pipeline managed, so they have all the same config/certs. The only difference we can see is that in the AWS TA input log, the 'broken' input never creates the S3 connection before fetching the S3 objects and seems to think the bucket is empty.

Working input:

2025-01-15 10:25:09,124 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:162 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 11:25:09+00:00"
2025-01-15 10:25:09,125 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_get_bucket:364 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="Create new S3 connection."
2025-01-15 10:25:09,130 level=INFO pid=5806 tid=Thread-6841 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=s3_key_processer.py:_do_index:148 | bucket_name="bucketname" datainput="input" last_modified="2025-01-15T04:00:41.000Z" phase="fetch_key" job_uid="8888" start_time=1736918987 key_name="bucketobject" | message="Indexed S3 files." size=819200 action="index"

Broken input:

2025-01-15 12:00:33,369 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:217 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 13:00:33+00:00"
2025-01-15 12:00:33,373 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_fetch_keys:378 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="88888", phase="fetch_key" | message="End of fetching S3 objects." pending_key_total=0

Unsure where to go from here, as we have tried this on multiple new machines. Thanks Meaf
Yes, I noticed the excess escaping too, but if it's incorrect then the SC4S suggested config is wrong too. Either way, I will try it both ways, and get back to you. Thank you so much for your help!
Hi, We have the same issue here. Upgraded from Splunk Ent. v9.3.2 to v9.4.0, running Windows 2019 Server. The KV store process not running also affects Splunk Secure Gateway (SSG/Splunk Mobile), Dashboard Studio (and I think Edge Hub etc.). Yes, I looked in mongod.log and splunkd.log, but I am not a bit wiser! The lines that stand out to me:

targetMinOS: Windows 7/Windows Server 2008 R2 - ???
this build only supports versions up to 4, and the file is version 5: - ??

See below some lines from my mongod.log:

2025-01-15T14:45:22.046Z I CONTROL [initandlisten] MongoDB starting : pid=2224 port=8191 dbpath=D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=Gozer2
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] db version v4.2.24
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] allocator: tcmalloc
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] modules: enterprise
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] build environment:
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] distmod: windows-64
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] distarch: x86_64
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] target_arch: x86_64
2025-01-15T14:45:22.047Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "D:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "102D93C2-E5B9-4347-88CA-59FB829D92E1" }, security: { javascriptEnabled: false, keyFile: "D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 4.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2025-01-15T14:45:22.048Z W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
2025-01-15T14:45:22.048Z W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
2025-01-15T14:45:22.049Z I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4761M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2025-01-15T14:45:22.083Z E STORAGE [initandlisten] WiredTiger error (-31802) [1736952322:82769][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:82769][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.100Z E STORAGE [initandlisten] WiredTiger error (-31802) [1736952322:100690][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:100690][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.116Z E STORAGE [initandlisten] WiredTiger error (-31802) [1736952322:115624][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:115624][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.150Z E STORAGE [initandlisten] WiredTiger error (-31802) [1736952322:149476][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:149476][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.175Z E STORAGE [initandlisten] WiredTiger error (-31802) [1736952322:175362][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1736952322:175362][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.179Z W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
2025-01-15T14:45:22.179Z F STORAGE [initandlisten] Reason: -31802: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.179Z F - [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
2025-01-15T14:45:22.179Z F - [initandlisten] \n\n***aborting after fassert() failure\n\n

Some lines from my splunkd.log:

01-15-2025 15:57:57.139 +0100 INFO TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:57:57.467 +0100 ERROR KVStorageProvider [5552 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:57:57.467 +0100 ERROR KVStoreAdminHandler [5552 TcpChannelThread] - An error occurred.
01-15-2025 15:58:03.592 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:10.645 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:17.723 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:24.745 +0100 WARN ExecProcessor [10156 ExecProcessor] - message from ""D:\Program Files\Splunk\bin\splunk-regmon.exe"" BundlesUtil - D:\Program Files\Splunk\etc\system\metadata\local.meta already exists but with different casing: D:\Program Files\splunk\etc\system\metadata\local.meta
01-15-2025 15:58:24.792 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:27.307 +0100 INFO TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:58:31.865 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:38.929 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:46.000 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:53.049 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:58:56.617 +0100 INFO TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:59:00.117 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:59:01.460 +0100 ERROR KVStorageProvider [5608 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:59:01.460 +0100 ERROR KVStoreAdminHandler [5608 TcpChannelThread] - An error occurred.
Hi @Karthikeya , are you using INDEXED_EXTRACTIONS=json for your sourcetype? Ciao. Giuseppe
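For context, index-time JSON extraction is configured roughly like this (a sketch with a hypothetical sourcetype name; note it only works when the whole event is well-formed JSON, and it must be set on the first full Splunk instance that parses the data, e.g. the indexers or a heavy forwarder, as well as on the search heads):

```
# props.conf
[my_json]
INDEXED_EXTRACTIONS = json
```

Unlike KV_MODE = json, this writes the extracted fields into the index, so it increases index size and cannot be changed retroactively for already-indexed events.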
Hello, We have JSON data coming into Splunk, and to extract it we have given: | rex "(?<json>\{.*\})" | spath input=json Now my ask is that I want this query to run by default for one or more sourcetypes, without giving it in the search query every time. Do I need to do it during onboarding itself? If yes, please help me with a step-by-step procedure. We don't have a HF. We have a deployment server, a manager, and 3 indexers. The DS will push apps to the manager, and from there the manager will push apps to the peers.