
All Posts

Unfortunately, at this moment Splunk can only do automatic structured data extraction if the whole event is well-formed structured data. So if your whole event is a JSON blob, Splunk can interpret it automatically. If it isn't, because it contains some header or footer, it's a no-go. There is an open idea about this on ideas.splunk.com (https://ideas.splunk.com/ideas/EID-I-208); feel free to upvote it. For now, all you can do is trim your original event to contain only the JSON part (but then you might lose some data, I know).
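If the wrapper is predictable, one way to do that trim is a SEDCMD at index time. A minimal sketch, assuming a hypothetical sourcetype name and that the JSON blob is a single {...} span in the event:

    # props.conf - a sketch; [my_wrapped_json] and the pattern are
    # assumptions, adjust them to your real sourcetype and wrapper format
    [my_wrapped_json]
    # Keep only the text from the first "{" to the last "}" (index time,
    # applied on the indexers or the first full Splunk instance in the path)
    SEDCMD-trim_to_json = s/^[^{]*(\{.*\})[^}]*$/\1/
    # Once the stored event is pure JSON, search-time auto-extraction works
    KV_MODE = json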
Description: Hello, I am experiencing an issue with the "event_id" field when transferring notable events from Splunk Enterprise Security (ES) to Splunk SOAR.

Details: When sending the event to SOAR using an Adaptive Response Action (Send to SOAR), the event is sent successfully, but the "event_id" field does not appear in the data received in SOAR.

Any assistance or guidance to resolve this issue would be greatly appreciated. Thank you
Yes, this is the way. Thanks @ITWhisperer, this is exactly what I was looking for.
Hi @pumphreyaw, @mattymo. Now I am stuck on the same problem. We don't actually have a HF. We have a deployment server which pushes apps to our manager and deployer, and from there the manager pushes apps to the peer nodes. We have 3 search heads and a deployer. Where do I need to put these configurations to extract the JSON data? Can you please help me step by step?
It shouldn't hurt. If you escape something that doesn't need escaping, nothing bad should happen. It's just ugly.
Hello @dbray_sd, have you tried cloning the older input and creating a new one? Sometimes the checkpoint fails during an upgrade, but cloning the input will create a new checkpoint and possibly resolve your issue.
There can be multiple reasons behind streamfwd.exe not running; you should file a support case to get this fixed.
Hi, not sure if you've tried this, but it looks like a similar issue in a lower-version upgrade: https://community.splunk.com/t5/Installation/After-upgrading-from-Splunk-6-2-3-to-6-3-0-why-am-I-getting/m-p/252282 Cheers, Meaf
No. Where do I need to specify this? What will this query do? Please explain.
Hi,

We are running a Splunk Enterprise HWF with a generic S3 input to fetch objects from an S3 bucket; however, each time we try to move this input onto a new, identical HWF we have issues getting the same data from the same bucket. Both instances are on Splunk 9.2, but the Splunk AWS TA versions are different. Both are pipeline managed, so they have all the same config/certs. The only difference we can see is that, in the AWS TA input log, the 'broken' input never creates the S3 connection before fetching the S3 objects and seems to think the bucket is empty.

Working input:

2025-01-15 10:25:09,124 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:162 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 11:25:09+00:00"
2025-01-15 10:25:09,125 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_get_bucket:364 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="Create new S3 connection."
2025-01-15 10:25:09,130 level=INFO pid=5806 tid=Thread-6841 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=s3_key_processer.py:_do_index:148 | bucket_name="bucketname" datainput="input" last_modified="2025-01-15T04:00:41.000Z" phase="fetch_key" job_uid="8888" start_time=1736918987 key_name="bucketobject" | message="Indexed S3 files." size=819200 action="index"

Broken input:

2025-01-15 12:00:33,369 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:217 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 13:00:33+00:00"
2025-01-15 12:00:33,373 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_fetch_keys:378 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="88888", phase="fetch_key" | message="End of fetching S3 objects." pending_key_total=0

Unsure where to go from here, as we have tried this on multiple new machines.

Thanks, Meaf
Yes, I noticed the excess escaping too, but if it's incorrect then the SC4S-suggested config is wrong too. Either way, I will try both variants and get back to you. Thank you so much for your help!
Hi, we have the same issue here. Upgraded from Splunk Ent. v9.3.2 to v9.4.0, running on a Windows 2019 server. The KV store process not running also affects Splunk Secure Gateway (SSG/Splunk Mobile), Dashboard Studio (and, I think, Edge Hub etc.). Yes, I looked in mongod.log and splunkd.log but am not a bit wiser! The lines that stand out to me are "targetMinOS: Windows 7/Windows Server 2008 R2" and "this build only supports versions up to 4, and the file is version 5". See below some lines from my mongod.log:

2025-01-15T14:45:22.046Z I CONTROL [initandlisten] MongoDB starting : pid=2224 port=8191 dbpath=D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=Gozer2
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] db version v4.2.24
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] allocator: tcmalloc
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] modules: enterprise
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] build environment:
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] distmod: windows-64
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] distarch: x86_64
2025-01-15T14:45:22.046Z I CONTROL [initandlisten] target_arch: x86_64
2025-01-15T14:45:22.047Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "D:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "102D93C2-E5B9-4347-88CA-59FB829D92E1" }, security: { javascriptEnabled: false, keyFile: "D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "D:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 4.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2025-01-15T14:45:22.048Z W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
2025-01-15T14:45:22.048Z W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
2025-01-15T14:45:22.049Z I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4761M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2025-01-15T14:45:22.083Z E STORAGE [initandlisten] WiredTiger error (-31802) [1736952322:82769][2224:140709387064240], connection: __log_open_verify, 925: unsupported WiredTiger file version: this build only supports versions up to 4, and the file is version 5: WT_ERROR: non-specific WiredTiger error
(the same -31802 "unsupported WiredTiger file version" error is logged four more times)
2025-01-15T14:45:22.179Z W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
2025-01-15T14:45:22.179Z F STORAGE [initandlisten] Reason: -31802: WT_ERROR: non-specific WiredTiger error
2025-01-15T14:45:22.179Z F - [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
2025-01-15T14:45:22.179Z F - [initandlisten] ***aborting after fassert() failure

Some lines from my splunkd.log:

01-15-2025 15:57:57.139 +0100 INFO TailReader [7248 tailreader0] - Batch input finished reading file='D:\Program Files\Splunk\var\spool\splunk\tracker.log'
01-15-2025 15:57:57.467 +0100 ERROR KVStorageProvider [5552 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:57:57.467 +0100 ERROR KVStoreAdminHandler [5552 TcpChannelThread] - An error occurred.
01-15-2025 15:58:03.592 +0100 ERROR KVStorageProvider [924 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
(the same replSetGetStatus error then repeats roughly every seven seconds from the KVStoreUpgradeStartupThread)
01-15-2025 15:58:24.745 +0100 WARN ExecProcessor [10156 ExecProcessor] - message from ""D:\Program Files\Splunk\bin\splunk-regmon.exe"" BundlesUtil - D:\Program Files\Splunk\etc\system\metadata\local.meta already exists but with different casing: D:\Program Files\splunk\etc\system\metadata\local.meta
01-15-2025 15:59:01.460 +0100 ERROR KVStorageProvider [5608 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'gozer2:8191']
01-15-2025 15:59:01.460 +0100 ERROR KVStoreAdminHandler [5608 TcpChannelThread] - An error occurred.
Hi @Karthikeya, are you using INDEXED_EXTRACTIONS=json for your sourcetype? Ciao. Giuseppe
Hello, we have JSON data coming into Splunk, and to extract it we use:

| rex "(?<json>\{.*\})"
| spath input=json

Now my ask is: I want this extraction to run by default for one or more sourcetypes, without giving it in the search query every time. Do I need to do it while onboarding itself? If yes, please help me with a step-by-step procedure. We don't have a HF. We have a deployment server, a manager, and 3 indexers; the DS pushes apps to the manager, and from there the manager pushes apps to the peers.
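Not a step-by-step for this exact environment, but as a hedged sketch: the search-time equivalent of that rex is an EXTRACT in props.conf, delivered in an app pushed to the search heads via the deployer. The sourcetype name below is a placeholder:

    # props.conf - search-time sketch; [my_sourcetype] is hypothetical
    [my_sourcetype]
    # Same regex as the search: pulls the JSON blob into a "json" field
    EXTRACT-json_blob = (?<json>\{.*\})

Note this only extracts the blob itself; the nested keys still need | spath input=json (which you could wrap in a macro), unless you trim the event to pure JSON at index time, in which case KV_MODE = json can auto-extract everything.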
@rohithvr19 Your script won't work on my machine, so I have created a sample script which returns a simple "Hello world" text on click of a dashboard button. Just create a similar configuration and Python file as per your requirement. Below is the code and file/folder structure.

hello_world.py:

    import splunk.rest
    from json import dumps

    class HelloWorld(splunk.rest.BaseRestHandler):
        '''Class for serving the custom endpoint.'''

        def handle_POST(self):
            '''Endpoint handler: returns a small JSON payload.'''
            payload = {"text": "Hello world!"}
            response = dumps({"data": payload, "status": "OK", "error": "None"})
            self.response.setHeader('content-type', 'application/json')
            self.response.write(response)

        # Handle both verbs, otherwise Splunk will throw an error
        handle_GET = handle_POST

restmap.conf:

    [script:my_custom_endpoint]
    match = /my_custom_endpoint
    handler = hello_world.HelloWorld

web.conf:

    [expose:my_custom_endpoint]
    pattern = my_custom_endpoint
    methods = GET, POST

Dashboard XML:

    <dashboard script="fetch_data.js" version="1.1">
      <label>My Dashboard</label>
      <description>Dynamic Result Example</description>
      <row>
        <panel>
          <html>
            <div>
              <button id="fetch-data-button">Fetch Data</button>
              <div id="div_result" style="margin-top: 10px; border: 1px solid #ccc; padding: 10px;">Result will be displayed here.</div>
            </div>
          </html>
        </panel>
      </row>
    </dashboard>

fetch_data.js:

    require([
        'jquery',
        'splunkjs/mvc',
        'splunkjs/mvc/simplexml/ready!'
    ], function($, mvc) {
        $('#fetch-data-button').on('click', function() {
            var service = mvc.createService();
            service.post('/services/my_custom_endpoint', {}, function(err, response) {
                // The handler's payload lands under response.data.data
                $('#div_result').html(response.data.data.text);
            });
            return false;
        });
    });

Try this code to learn and understand the custom endpoint, then develop a new endpoint as per your needs. I hope this will help you. Thanks, KV. An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
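As a quick sanity check before wiring up the dashboard, you can hit the endpoint directly on the splunkd management port (credentials and host here are hypothetical; -k skips certificate verification on a test box):

    curl -k -u admin:yourpassword -X POST https://localhost:8089/services/my_custom_endpoint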
Hi @michael_vi, a ServerClass is a relation table between a list of hosts and a list of apps to be deployed to those hosts, so you can move apps between ServerClasses without any problem, obviously paying attention to still cover all the hosts. Ciao. Giuseppe
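To illustrate that relation table, a minimal serverclass.conf sketch on the deployment server (class, host pattern, and app name are hypothetical):

    # serverclass.conf
    [serverClass:linux_hosts]
    whitelist.0 = linux-*

    # Apps listed under a class are deployed to every matching host;
    # moving an app between classes is just moving this stanza
    [serverClass:linux_hosts:app:my_json_app]
    restartSplunkd = true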
Hi @Richy_s, as I said (and I say this aligned with my second role in my company: privacy and ISO 27001 Lead Auditor!), the only way to mask PII is to analyze your new data stored in a temporary index, deriving a list of controls. Then you can implement these rules in props and transforms, as described in the below link. Then you can prepare an alert, to run e.g. once a day, applying the same controls on all the data archived that day; if the alert finds something, it means that you have to extend your checks to other data.

It isn't possible to run these controls before indexing because Splunk searches run on indexed data. The only other solution could be: index all data into temporary indexes not accessible to users, execute the checks, mask any data found, and copy all the data into the final indexes accessible to users. The only issue is that, in this way, you duplicate the license consumption!

Ciao. Giuseppe
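For the props-and-transforms part, a minimal masking sketch (index-time, applied on the indexers or a HF; the sourcetype and the SSN-like pattern are assumptions, real controls will be broader):

    # props.conf
    [my_pii_sourcetype]
    # Replace anything that looks like a US SSN before it is written to disk
    SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g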
You could try something like this:

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = \_subsecond\:\:(\.\d+)
FORMAT = $1 $0
DEST_KEY = subsecond_temp

[metadata_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp),_raw,subsecond_temp." "._raw)

Of course, you need to add the metadata_fix_subsecond transform into TRANSFORMS-zza-syslog before metadata_subsecond. The number of backslashes in that subsecond regex is surprisingly high; those characters shouldn't normally need escaping.
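For comparison, a functionally equivalent pattern without the redundant escapes (underscore and colon are literal characters in regular expressions):

REGEX = _subsecond::(\.\d+)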
Splunk's version of arrays is the multivalue field, so if you change your input to a multivalue field, you could do something like this:

| eval Tag = split(lower("Tag3,Tag4"),",")
| spath
| foreach *Tags{}
    [| eval field="<<FIELD>>"
     | foreach <<FIELD>> mode=multivalue
        [| eval tags=if(isnull(tags),
            if(mvfind(Tag,lower('<<ITEM>>')) >= 0, field, null()),
            mvappend(tags, if(mvfind(Tag,lower('<<ITEM>>')) >= 0, field, null())))]
    ]
| stats values(tags)
OK. Let me rephrase it. This is a typical attempt to "fix" policy issues with technical means. Without _knowing_ where the PII is, you're doomed to guess. And guessing is never accurate. BTDTGTT.