All Posts


@jiaminyun To configure Splunk to limit an index to 80% of its maximum size and prevent further data from being written, first define the maximum size for your index in indexes.conf using the maxTotalDataSizeMB attribute. For example, if you want the maximum size to be 100 GB:

[your_index_name]
maxTotalDataSizeMB = 102400

To enforce the 80% limit, you can use the maxVolumeDataSizeMB attribute within a volume configuration. This attribute specifies the maximum size for the volume; set it to 80% of the total size. For example, if the total size is 100 GB, set the volume size to 80 GB:

[volume:your_volume_name]
path = /path/to/your/volume
maxVolumeDataSizeMB = 81920

Configure maximum index size - Splunk Documentation
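A volume cap only takes effect for indexes whose paths actually reference that volume, so the two stanzas need to be tied together. A minimal sketch of how that might look in indexes.conf (the index name, volume name, and paths are placeholders, not from the original post):

[volume:your_volume_name]
# Cap the whole volume at 80 GB (80% of the 100 GB total)
path = /path/to/your/volume
maxVolumeDataSizeMB = 81920

[your_index_name]
# Put hot/warm and cold buckets on the capped volume
homePath = volume:your_volume_name/your_index_name/db
coldPath = volume:your_volume_name/your_index_name/colddb
# thawedPath cannot reference a volume; it must be a literal path
thawedPath = $SPLUNK_DB/your_index_name/thaweddb
maxTotalDataSizeMB = 102400

Note that when the volume reaches its cap, Splunk freezes the oldest buckets to bring usage back under the limit, rather than blocking new writes outright.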
Oh I see, sorry. In that case you could do:

[severity]
REGEX = "level":\s\"(Information)
FORMAT = severity::INFO
WRITE_META = true

This means it will only set the severity field (to INFO) when level=Information. Is this what you want, or should it be set to other values when the level is not Information? Is there a particular reason you are looking to make this an index-time rather than a search-time change?
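Since WRITE_META makes severity an indexed field, one way to verify the change after a restart is the indexed-field search syntax (the index name below is a placeholder):

index=your_index severity::INFO | head 5

If events come back, the transform fired at index time.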
Excellent, this worked. Thanks a lot for your efforts.
To be honest, this log looks like a mess - it appears to be trying to use JSON but is failing - if you have any influence over the app developers, try to get them to fix their log format. Failing that, you could try something really messy like this:

| rex "MESSAGE=\"Rooms successfully updated for building - IL01: (?<msg>.+?)(?<!\\\\)\""
| eval msg="{\"il01\":\"".msg."\"}"
| spath input=msg
| spath input=il01 {} output=array
| mvexpand array
| spath input=array
| table name id

This assumes that "Rooms successfully updated for building - IL01" is a static string; if it isn't, you might need to replace some or all of it with the corresponding regular expression.
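For reference, a hypothetical raw event in the shape this search expects (field names and values invented for illustration):

MESSAGE="Rooms successfully updated for building - IL01: [{\"name\": \"Room A\", \"id\": 101}, {\"name\": \"Room B\", \"id\": 102}]"

The rex captures everything after the static prefix up to the first unescaped closing quote, the eval wraps the capture into valid JSON under a made-up il01 key, and the spath/mvexpand chain unpacks the array into one result per room.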
Here with a single webhook: [screenshot] Here with no webhooks defined: [screenshot]

Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
@kiran_panchavat Thanks, this helps 
2/10/25 11:00:18.000 AM
{
   adf: true
   avg_ingress_latency_fe: 0
   client_dest_port: 443
   client_ip: 128.12.73.92
   client_rtt: 2
   client_src_port: 23575
   conn_est_time_fe: 1
   log_id: 97378
   max_ingress_latency_fe: 0
   ocsp_status_resp_sent: true
   report_timestamp: 2025-02-10T11:00:18.780490Z
   request_state: AVI_HTTP_REQUEST_STATE_SSL_HANDSHAKING
   service_engine: GB-DRN-AB-Tier2-se-vxeuz
   significant: 0
   significant_log: [ ... ]
   source_ip: 128.12.73.92
   tenant_name: admin
   udf: false
   vcpu_id: 0
   virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
   vs_ip: 128.160.71.101
   vs_name: v-wasphictst-wdc.hc.cloud.uk.fed-443
}

2/10/25 11:00:18.000 AM
{
   adf: true
   avg_ingress_latency_fe: 0
   client_dest_port: 443
   client_ip: 128.12.53.70
   client_rtt: 1
   client_src_port: 50068
   conn_est_time_fe: 1
   log_id: 97377
   max_ingress_latency_fe: 0
   ocsp_status_resp_sent: true
   report_timestamp: 2025-02-10T11:00:18.779796Z
   request_state: AVI_HTTP_REQUEST_STATE_SSL_HANDSHAKING
   service_engine: GB-DRN-AB-Tier2-se-vxeuz
   significant: 0
   significant_log: [ ... ]
   source_ip: 128.12.53.70
   tenant_name: admin
   udf: false
   vcpu_id: 0
   virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
   vs_ip: 128.160.71.101
   vs_name: v-wasphictst-wdc.hc.cloud.uk.fed-443
}

Are these two duplicate events? We are receiving them the same way on our UFs as well.
Can someone please provide a clear image of the Splunk Webhook allowlist setting from the Splunk Cloud Console? I am using the Splunk Cloud Trial version, and it seems this option is not available in the Trial version. #splunkcloud #Webhookallowlist
@L_Petch Have a look:- Solved: How to deploy self-signed certs to deployment clie... - Splunk Community
Hello, I want to deploy 3rd party SSL certs via an app using the deployment server, as there are too many Splunk Forwarders to do this individually. This works; however, as there is an SSL line with the default password in server.conf, it reads this first and therefore won't read the correct SSL password in the app's server.conf file, stopping it from working. Is there a better way of doing this so that I don't need to write a script to hash out the SSL section in server.conf?
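For context, the deployed app's server.conf presumably contains something like the sketch below (the cert path and password are placeholders). Splunk's configuration precedence puts system/local above app directories for global settings such as [sslConfig], which is why the default sslPassword in system/local/server.conf wins over the app's copy:

[sslConfig]
# Hypothetical app-level override; ignored while system/local/server.conf
# still defines its own sslPassword, because system/local has higher precedence
serverCert = /opt/splunkforwarder/etc/auth/mycerts/server.pem
sslPassword = your_cert_password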
Looks like you have some elements which are only partially removed from the code. Elements are defined in one array and the layout is in another array (see the sketch below). You'll need to remove the erroneous references to continue. This error isn't about the refresh, but about the dashboard code not being valid.

1. Clone the dashboard, or make a backup
2. Note down the element IDs
3. Open code view
4. Use Find (CTRL-F, or your browser/OS shortcut) and enter each ID
5. Remove all instances of the ID, being careful not to delete other bits of code
6. Post again when you've cleared out the errored elements
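To illustrate, a stripped-down sketch of the Dashboard Studio source (the IDs are invented): each element is defined once under "visualizations" and referenced again by ID under "layout", so an ID left behind in either place after a delete produces exactly this kind of validation error.

{
    "visualizations": {
        "viz_example1": {
            "type": "splunk.singlevalue",
            "dataSources": { "primary": "ds_search1" }
        }
    },
    "layout": {
        "type": "grid",
        "structure": [
            {
                "item": "viz_example1",
                "position": { "x": 0, "y": 0, "w": 300, "h": 300 }
            }
        ]
    }
}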
Hi @dataisbeautiful  I'm not deleting any elements in the code. I'm trying to update the refresh option for the whole dashboard. Currently it is 2m by default (from the clone or somewhere else), but I want to either remove it or set a higher refresh value. And the same error pops up for all panel IDs; I've attached a screenshot.
Hi @gcusello I tried both the spath and rex commands. Neither worked. Using spath I didn't get any stats. With rex, it showed 1317 results, but the output is completely empty; you could say it created an empty table. Please give me the full command which you feel is going to work. The index and search string have already been provided.
It helped, but how can I ensure that it creates the severity = INFO field only when level=Information?
Hi Will, Thank you for your comment. Upon checking with the search you sent, the last chance index has the highest count and main has returned only 1 result. I just want to know what the issue could be: why are the events being redirected to the last chance index even though I declared the index before creating the input? Is there an extra step I've missed configuring or enabling, which is why the events go to the last chance index instead of the index I've created?
Hi @livehybrid  Thanks for the response.

| tstats summariesonly=true values(All_Traffic.dest) as dest dc(All_Traffic.dest) as count from datamodel=Network_Traffic where All_Traffic.dest_port!="443" All_Traffic.dest_port!="80" All_Traffic.src_ip!="*:*" All_Traffic.src_ip!="5.195.243.8" ```cpx PT IP``` by All_Traffic.src_ip All_Traffic.dest_port
| rename All_Traffic.src_ip as src All_Traffic.dest_port as dest_port
| search NOT [| inputlookup internalip]
| where count>=20
| iplocation src
| eval severity="high"

This is how one of the use cases looks, and recent notables have the urgency as below: [screenshot] When I check index=notable for this alert, the severity is showing as high.
Hi @RSS_STT  The issue here is the SOURCE_KEY, which is incorrectly set; it should be set to _raw, although _raw is the default, so you could just remove that line entirely. You also do not need to specify the naming of the extraction in the REGEX and can instead use $1, so your resulting transform will look like:

[severity]
REGEX = "level":\s\"(\w+)
FORMAT = severity::"$1"
WRITE_META = true

Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
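One thing worth adding: an index-time transform only runs if it is referenced from props.conf, and the new field needs a fields.conf entry to behave as an indexed field at search time. A minimal sketch, assuming a hypothetical sourcetype name:

props.conf:

[your_sourcetype]
TRANSFORMS-severity = severity

fields.conf:

[severity]
INDEXED = true

These files belong on the indexer or heavy forwarder that parses the data, followed by a restart.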
I want to create a new index-time field severity if the raw JSON payload's level field value is Information.

{ "level": "Information", "ORIGIN_Severity_name": "CRITICAL", "ProductArea": "Application", "ORIGIN_Product": "Infrastructure" }

What's wrong in my transforms.conf configuration? Any help much appreciated.

transforms.conf:

[severity]
REGEX = "level":\s\"(?<severity>\w+)
SOURCE_KEY = fields:level
FORMAT = severity::"INFO"
WRITE_META = true
Thank you for your response. How do I configure Splunk to limit an index to 80% and prevent data from being written to it?