All Posts

I had this issue today. I tried the few things suggested here, but none fixed my problem. I looked in the _internal index for possible error messages and found: "The certificate chain length (11) exceeds the maximum allowed length (10)". I changed the limit to 15 and my issue was fixed. Under Configurations -> Settings -> Query Server JVM Options I entered:

    -Djdk.tls.maxCertificateChainLength=15

Hope this helps.
I believe you should have something like the below. Did you already try this?

On your parsing instance:

    [my_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = }([\n\r]*){
    TRUNCATE = <as needed>

On your search head:

    [my_sourcetype]
    KV_MODE = json

Hope this helps.
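For what it's worth, LINE_BREAKER splits the stream at the first capturing group and discards whatever that group matched — here, the newlines between one JSON object's closing brace and the next one's opening brace. A rough Python sketch of that behavior (the sample events are made up, and this is only an approximation of Splunk's line breaking):

```python
import re

# A stream of concatenated JSON events, as Splunk would see them.
stream = '{"a": 1}\n{"b": 2}\n{"c": 3}'

# Split where a closing brace is followed by an opening brace, mirroring
# LINE_BREAKER = }([\n\r]*){ — the split consumes both braces, so we
# restore them on each side afterwards.
parts = re.split(r'\}[\r\n]*\{', stream)
events = []
for i, p in enumerate(parts):
    if i > 0:
        p = '{' + p          # re-add the opening brace the split removed
    if i < len(parts) - 1:
        p = p + '}'          # re-add the closing brace the split removed
    events.append(p)
```

Each element of `events` is then one well-formed JSON event, which is what lets KV_MODE = json extract fields cleanly at search time.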
All those commands have some limitations — some explicitly configured, some implicitly derived from operating system limits. Anyway, simply replacing eventstats with streamstats obviously won't do. With streamstats you'd need to copy values over between events, and I don't have a ready solution. I have a hunch it's possible, but it will be ugly and possibly even less efficient than the eventstats-based one.
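To illustrate the distinction this rests on — a generic sketch, not Splunk code: eventstats attaches an aggregate computed over the whole result set to every event, while streamstats attaches a running aggregate over only the events seen so far, which is why one cannot simply be swapped for the other.

```python
values = [3, 1, 4, 1, 5]

# eventstats-style: every event sees the aggregate over ALL events
total = sum(values)
with_eventstats = [(v, total) for v in values]

# streamstats-style: each event sees the aggregate over events so far
with_streamstats = []
running = 0
for v in values:
    running += v
    with_streamstats.append((v, running))
```

Only the last streamstats row matches what eventstats would report, so emulating eventstats with streamstats means somehow carrying that final value back to the earlier events — the "copying over values" mentioned above.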
We are having an issue where, in order to see correct JSON syntax highlighting, we have to set "max lines" to "all lines". On a separate post the resolution was to turn off "pretty printing", so that instead of each event taking up multiple lines it only takes up one, which then allows Splunk to show the data with the correct JSON syntax highlighting. How do I turn this off?
Here is the sample log.

    {"cluster_id":"cluster","kubernetes":{"host":"host","labels":{"app":"app","version":"v1"},"namespace_name":"namespace","pod_name":"pod"},"log":{"App":"app_name","Env":"stg","LogType":"Application","contextMap":{},"endOfBatch":false,"level":"INFO","loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","loggerName":"com.x.x.x.X","message":"Json path=/path feed=NAME sku=SKU_NAME status=failed errorCount=3 errors=ERROR_1, ERROR_2, MORE_ERROR_3 fields=Field 1, Field 2, More Fields Here","source":{"class":"com.x.x.x.X","file":"X.java","line":1,"method":"s"},"thread":"http-apr-8080-exec-4","threadId":1377,"threadPriority":5,"timeMillis":1727978156925},"time":"2024-10-03T17:55:56.925335046Z"}

Expected output from the message field:

    path   feed  sku       status  errorCount  errors                           fields
    /path  Name  SKU_NAME  failed  3           ERROR_1, ERROR_2, MORE_ERROR_3   Field 1, Field 2, More Fields Here

If the data within the message field is ugly, I am willing to modify it. But I assume it will be treated as raw data and will not be treated as fields. @PickleRick

---

This seems to work when these regexes are removed:

    errors=(?P<errors>[^,]+)
    fields=(?P<fields>[^,]+)

How do I fix errors and fields? When tested on https://pythex.org/ it works.

    index=item-interface "kubernetes.namespace_name"="namespace" "cluster_id":"*stage*" "Env":"stg" "loggerName":"com.x.x.x.X" "Json path=/validate feedType=" "log.level"=INFO
    | rename log.message as _raw
    | rex field=_raw "Json path=(?P<path>\/\w+) feedType=(?P<feedType>\w+) sku=(?P<sku>\w+) status=(?P<status>\w+) errorCount=(?P<errorCount>\w+) errors=(?P<errors>[^,]+) fields=(?P<fields>[^,]+)"
    | table path, feedType, sku, status, errorCount, errors, fields
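One likely reason the errors= and fields= groups misbehave is that `[^,]+` stops at the first comma, while both values themselves contain commas. A hedged sketch of a fix — match errors lazily up to the literal " fields=" anchor and let fields take the rest of the line. This is tested in plain Python; the same pattern should drop into rex, though note I used the sample message's `feed=` key rather than the search's `feedType=`:

```python
import re

# The message value from the sample log above.
message = ("Json path=/path feed=NAME sku=SKU_NAME status=failed "
           "errorCount=3 errors=ERROR_1, ERROR_2, MORE_ERROR_3 "
           "fields=Field 1, Field 2, More Fields Here")

# [^,]+ stops at the first comma; instead match errors lazily (.+?) up
# to the " fields=" anchor and let fields (.+$) consume the remainder.
pattern = (r"Json path=(?P<path>/\w+) feed=(?P<feed>\w+) sku=(?P<sku>\w+) "
           r"status=(?P<status>\w+) errorCount=(?P<errorCount>\d+) "
           r"errors=(?P<errors>.+?) fields=(?P<fields>.+)$")

m = re.search(pattern, message)
```

This relies on the assumption that " fields=" always follows the errors list; if "fields=" can appear inside an error message itself, a stricter anchor would be needed.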
This solved my issue
I believe there is something wrong with your DMC setup. Please only enable Distributed Mode on the monitoring console, not on the search head and indexers. You can also try creating a new health check item with some query and see if that works: /en-US/app/splunk_monitoring_console/monitoringconsole_check_list. I would encourage you to open a support case or an OnDemand request.
1. We still have no idea what your raw data looks like. For example, how are we supposed to know whether the log.message path is the right one? I suppose it is, because you're getting _some_ result, but we have no way to know.
2. Your initial search is very inefficient. Wildcards in the middle of search terms can give strange and inconsistent results, and generally wildcards anywhere other than at the end of a search term slow your search down.
3. You're getting some result but you're not showing us anything. How are we supposed to understand what you're getting?
4. Don't get attached to this "177 events" number. It's just all the events matched by your initial search.
5. There are two main techniques for debugging searches: either you start from the beginning and add commands one by one until the results stop making sense, or you start with the whole search and remove commands one by one until the results start making sense.
6. Honestly, I have no idea what you're trying to achieve with this mvzip/mvexpand/regex magic.
Hello, what should I do when my kvstore folder has just vanished? It does not get created again after a restart. I tried creating a new folder with Splunk privileges, but it doesn't help. Do you have any idea how to repair this?
I have tried all the recommendations in this thread and none of them works. I upgraded from 9.0 to 9.3, but the clients are not phoning in.
I have the following setting in splunk_monitoring_console_assets.conf at /etc/apps/splunk_monitoring_console/local/:

    [settings]
    disabled = 0

I have the same setting on the DMC, SH, and indexers. Everything works except the DMC. I have the following roles for the DMC. Should any other roles be enabled?
First, let's find the transforms.conf by running the btool command below:

    /opt/splunk/bin/splunk btool transforms list --debug | grep sourcetype_1

Then you can try something like this on the transforms.conf from the app found above:

    splunk@idx1:/opt/splunk/bin$ /opt/splunk/bin/splunk btool validate-regex /opt/splunk/etc/apps/learned/local/transforms.conf --debug
    Bad regex value: '-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.:-]+)', of param: transforms.conf / [metrics_field_extraction] / REGEX; why: unmatched closing parenthesis
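The same class of problem can be reproduced outside Splunk: an unmatched closing parenthesis makes the pattern uncompilable in any regex engine, which is exactly what validate-regex reports. A quick Python check on the broken value from the btool output above (Python's error text differs from Splunk's, but it trips on the same parenthesis):

```python
import re

# The REGEX value btool flagged — it appears truncated at the start,
# leaving the ")" after "]+" with no matching "(".
bad = r'-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.:-]+)'

try:
    re.compile(bad)
    compiles = True
except re.error:
    # Python reports this as an unbalanced parenthesis, the same root
    # cause Splunk's validate-regex complains about.
    compiles = False
```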
@sainag_splunk The command doesn't return anything. Is there supposed to be an index or sourcetype in the command?
Hello, I'm trying to achieve a result set which can be used in an alert later on. Basically, when the search is executed, it should look for a field named "state" and compare its value with the value from two hours ago for the same corresponding record, keyed by the field "pv_number". If the value of the field did not change between now and two hours ago, capture it in a table showing the previous and current state along with the previous and current time. Any help is greatly appreciated. Thanks much!
Hello! There could be a regex defined on that sourcetype. Please run btool on the backend for that sourcetype and check whether you find any spaces or typos in that regex, then try to remove them:

    /opt/splunk/bin/splunk btool validate-regex --debug

I would also check the search.log to see what's happening there. Hope this helps.
I don't see the issue in my lab with the same version. The only cause I could think of is unconfigured instances: make sure there are none listed on your monitoring console, then set this up and apply changes as per this doc: https://docs.splunk.com/Documentation/Splunk/9.3.1/DMC/Configureindistributedmode#Reset_server_roles_after_restart Hope this helps.
Thank you for sharing your detailed process and the issue you're encountering with JSON log ingestion. Your testing approach was thorough, but there are a few key points to address.

Props.conf location: the primary parsing settings should be on the indexers, not the search heads. For JSON data, you typically only need minimal settings on the search head.

Search head settings: on the search head, you can simplify your props.conf to just:

    [yoursourcetype]
    KV_MODE = JSON

This tells Splunk to parse the JSON at search time, which should give you the field extractions you're looking for. To onboard this properly you can also set the MAGIC6 props on your indexers: https://community.splunk.com/t5/Getting-Data-In/props-conf/m-p/426134

Try running the search below to figure out which app is taking precedence:

    | rest splunk_server=local /services/configs/conf-props/YOURSOURCETYPE | transpose | search column=eai:acl.app

Please upvote / accept as solution if this helps.
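As a rough illustration of what KV_MODE = JSON gives you at search time — nested keys become dotted field names — here is a generic sketch. The sample event and field names are invented, and Splunk's real handling of arrays and multivalue fields is more involved than this:

```python
import json

# An invented single-line JSON event, as it might sit in _raw.
raw = '{"status": "ok", "user": {"name": "alice", "role": "admin"}}'

def flatten(obj, prefix=""):
    """Flatten nested JSON into dotted field names (user.name, ...)."""
    fields = {}
    for key, value in obj.items():
        name = prefix + key
        if isinstance(value, dict):
            fields.update(flatten(value, name + "."))
        else:
            fields[name] = value
    return fields

fields = flatten(json.loads(raw))
```

After search-time JSON extraction you would query these as `user.name=alice`, which is why well-formed, one-event-per-line JSON at the indexer layer matters so much.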
Hi @Henry.Tellez, Did you see the reply above from @Tiong.Koh? If that reply helped, can you please click the "Accept as Solution" button? This helps let the community know this question was answer... See more...
Hi @Henry.Tellez, Did you see the reply above from @Tiong.Koh? If that reply helped, can you please click the "Accept as Solution" button? This helps let the community know this question was answered. If it didn't help, reply back and continue the conversation. 
I'm looking for application availability and not server uptime.