All Posts


Hi, I’m currently encountering the following error message in `splunkd.log` when I enable the custom TA add-on. I have a Python script that successfully tests the signed CSR, private key, and root CA: it can establish a connection and retrieve logs as expected. However, when the created application runs, I see the error below. I’ve double-checked the values and everything appears to be the same. It works in our testing environment; the only difference I noticed is that the root CA certificate is in .csr format. Should I convert it to .pem, as we did in the testing environment?

-0700 ERROR ExecProcessor - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/TA_case/bin/case.py" HTTPSConnectionPool(host='<HiddenForSensitivityPurpose>', port=443): Max retries exceeded with url: <HiddenForSensitivityPurpose>caseType=Service+Case&fromData=2025-02-06+17%3A23&endDate=2025-02-06+21%3A23 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))
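For comparison, this is roughly how my standalone test script makes the call — a minimal sketch with placeholder paths and URL (real values hidden for sensitivity):

```python
import requests

# Client cert + key issued against the CSR, and the root CA bundle used to
# verify the server. All paths below are placeholders.
resp = requests.get(
    "https://<HiddenForSensitivityPurpose>/cases",
    cert=("/data/splunk/etc/apps/TA_case/certs/client.pem",
          "/data/splunk/etc/apps/TA_case/certs/client.key"),
    # verify= must point at a PEM-encoded CA certificate, not a .csr
    verify="/data/splunk/etc/apps/TA_case/certs/rootca.pem",
)
resp.raise_for_status()
```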
@mattymo @gcusello @PickleRick can you all please answer this question - https://community.splunk.com/t5/Getting-Data-In/Duplicate-values-because-of-json-values/m-p/711126#M117476
Thank you very much. I followed your method and resolved the issue.
Thanks so much @livehybrid 
Hi @Raja1 Do you now get a different error? What do you see in splunkd.log and mongod.log, or in the CLI output? Thanks
Hmm, that is odd. So you are seeing both Medium and High being created? Please can you double-check that there isn't a search running with the same rule name that could be creating the Medium severity alerts? In the past, when I have cloned ESCU searches for example, I have accidentally left the original searches enabled and ended up creating notables from them too!
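If it helps, a quick way to look for clashing rule names is something like this sketch (swap in your actual rule name; `action.correlationsearch.label` is only populated on ES correlation searches):

```
| rest /servicesNS/-/-/saved/searches
| search title="*<your rule name>*" OR action.correlationsearch.label="*<your rule name>*"
| table title eai:acl.app disabled alert.severity
```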
It's worth double-checking that it has successfully stored the index name in your inputs.conf for your modular input, and also double-check the code where the event is written in your modular input Python to make sure it is specifying the correct value. If you want to post your Python code I'd be happy to have a look, but don't post anything proprietary to you. Thanks
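As a rough illustration — a minimal sketch assuming the Splunk Python SDK's `splunklib.modularinput` framework and made-up input/field names, not your actual code — the index is usually read from the input stanza and set explicitly on each event:

```python
import sys
from splunklib.modularinput import Script, Scheme, Event

class CaseInput(Script):
    def get_scheme(self):
        # Minimal scheme; real inputs would declare arguments here
        return Scheme("case_input")

    def stream_events(self, inputs, ew):
        for stanza_name, stanza in inputs.inputs.items():
            event = Event()
            event.stanza = stanza_name
            # Use the index configured on the input stanza (if Splunk passed it
            # through); otherwise fall back to "main"
            event.index = stanza.get("index", "main")
            event.sourcetype = "case:json"
            event.data = '{"example": "payload"}'
            ew.write_event(event)

if __name__ == "__main__":
    sys.exit(CaseInput().run(sys.argv))
```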
If it is just a single index that you want to be 80% of your available storage, then set maxTotalDataSizeMB for that specific index in indexes.conf to 0.8 * <AvailableSpaceInMB>. However... be aware that if you have multiple indexes then that value applies to a single index only, and even if you set it for each index it means each index can use 80% of your storage. Instead, you can configure a volume for all of your indexes to be stored in. See the indexes.conf docs for more examples, but as a brief overview:

[volume:yourVolume]
path = /mnt/big_disk2
# This would be 0.8 * <SizeOfDiskInMB>
maxVolumeDataSizeMB = 1000000

# index definitions
[idx1]
homePath = volume:yourVolume/idx1/db
coldPath = volume:yourVolume/idx1/colddb
# thawedPath must be specified, and cannot use volume: syntax
# choose a location convenient for reconstitution from archive goals
# For many sites, this may never be used.
thawedPath = $SPLUNK_DB/idx1/thaweddb

It is important to remember to set the home/cold path to use your volume! You can specify multiple volumes for different indexes, or for hot/cold data, depending on your storage configuration and requirements. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
I have the same issue
Hi @Tajuddin , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
@jiaminyun To configure Splunk to limit an index to 80% of its maximum size and prevent further data from being written, define the maximum size for your index in indexes.conf using the maxTotalDataSizeMB attribute. For example, if you want the maximum size to be 100 GB:

[your_index_name]
maxTotalDataSizeMB = 102400

To enforce the 80% limit, you can use the maxVolumeDataSizeMB attribute within a volume configuration. This attribute specifies the maximum size for the volume, and you can set it to 80% of the total size. For example, if the total size is 100 GB, set the volume size to 80 GB:

[volume:your_volume_name]
path = /path/to/your/volume
maxVolumeDataSizeMB = 81920

Configure maximum index size - Splunk Documentation
Oh I see, sorry. In that case you could do:

[severity]
REGEX = "level":\s\"(Informational)
FORMAT = severity::INFO
WRITE_META = true

This means it will only set the severity field (to INFO) when level=Informational - is this what you want, or should it be other values when not Informational? Is there a particular reason you are looking to make this index-time instead of a search-time change?
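For completeness, a transforms.conf stanza like that only takes effect once it is referenced from props.conf and, because it is an indexed field, declared in fields.conf. A minimal sketch, assuming your sourcetype is called `your:sourcetype` (swap in your real one):

```
# props.conf
[your:sourcetype]
TRANSFORMS-severity = severity

# fields.conf
[severity]
INDEXED = true
```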
Excellent, This worked. Thanks a lot for your efforts.
To be honest, this log looks like a mess - it appears to be trying to use JSON but is failing - if you have any influence over the app developers, try to get them to fix their log format. Failing that, you could try something really messy like this:

| rex "MESSAGE=\"Rooms successfully updated for building - IL01: (?<msg>.+?)(?<!\\\\)\""
| eval msg="{\"il01\":\"".msg."\"}"
| spath input=msg
| spath input=il01 {} output=array
| mvexpand array
| spath input=array
| table name id

This assumes that "Rooms successfully updated for building - IL01" is a static string; if it isn't, you might need to replace some or all of this with the corresponding regular expression.
Here with a single webhook: (screenshot) Here with no webhooks defined: (screenshot) Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
@kiran_panchavat Thanks, this helps 
2/10/25 11:00:18.000 AM
{
   adf: true
   avg_ingress_latency_fe: 0
   client_dest_port: 443
   client_ip: 128.12.73.92
   client_rtt: 2
   client_src_port: 23575
   conn_est_time_fe: 1
   log_id: 97378
   max_ingress_latency_fe: 0
   ocsp_status_resp_sent: true
   report_timestamp: 2025-02-10T11:00:18.780490Z
   request_state: AVI_HTTP_REQUEST_STATE_SSL_HANDSHAKING
   service_engine: GB-DRN-AB-Tier2-se-vxeuz
   significant: 0
   significant_log: [ ]
   source_ip: 128.12.73.92
   tenant_name: admin
   udf: false
   vcpu_id: 0
   virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
   vs_ip: 128.160.71.101
   vs_name: v-wasphictst-wdc.hc.cloud.uk.fed-443
}

2/10/25 11:00:18.000 AM
{
   adf: true
   avg_ingress_latency_fe: 0
   client_dest_port: 443
   client_ip: 128.12.53.70
   client_rtt: 1
   client_src_port: 50068
   conn_est_time_fe: 1
   log_id: 97377
   max_ingress_latency_fe: 0
   ocsp_status_resp_sent: true
   report_timestamp: 2025-02-10T11:00:18.779796Z
   request_state: AVI_HTTP_REQUEST_STATE_SSL_HANDSHAKING
   service_engine: GB-DRN-AB-Tier2-se-vxeuz
   significant: 0
   significant_log: [ ]
   source_ip: 128.12.53.70
   tenant_name: admin
   udf: false
   vcpu_id: 0
   virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
   vs_ip: 128.160.71.101
   vs_name: v-wasphictst-wdc.hc.cloud.uk.fed-443
}

Are these two duplicate events? We are receiving them in the same way on our UFs as well.
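For reference, a quick sketch of a check for exact duplicates, assuming a hypothetical index name `avi` and counting by the fields shown above (the two events differ in log_id, client_src_port and report_timestamp):

```
index=avi sourcetype=<your_sourcetype>
| stats count BY log_id report_timestamp client_ip client_src_port
| where count > 1
```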
Can someone please provide a clear image of the Splunk Webhook allowlist setting from the Splunk Cloud Console? I am using the Splunk Cloud trial version and it seems this option is not available in the trial. #splunkcloud #Webhookallowlist
@L_Petch Have a look:- Solved: How to deploy self-signed certs to deployment clie... - Splunk Community
Hello, I want to deploy 3rd party SSL certs via an app using the deployment server, as there are too many Splunk forwarders to do this individually. This works; however, because there is an SSL line with the default password in server.conf, it reads that first and therefore won't read the correct SSL password in the app's server.conf file, which stops it from working. Is there a better way of doing this so that I don't need to write a script to comment out the SSL section in server.conf?
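For what it's worth, a minimal sketch of what the deployed app might carry — the app name `org_all_forwarder_ssl` and the cert paths are assumptions, and sslPassword can be left in clear text for Splunk to encrypt on first start:

```
# etc/deployment-apps/org_all_forwarder_ssl/local/server.conf
[sslConfig]
serverCert = $SPLUNK_HOME/etc/apps/org_all_forwarder_ssl/certs/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/org_all_forwarder_ssl/certs/ca.pem
sslPassword = <your certificate key password>
```

The catch described above is configuration precedence: $SPLUNK_HOME/etc/system/local/server.conf wins over any app's server.conf, so a default sslPassword there will shadow the one delivered in the app.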