
All Posts

The latest 9.4.x has its own issues with platformVersion etc., so please first check whether there is anything else that could affect your environment! Then select a suitable version to update to.
Is the | history command supposed to include details of scheduled searches as well? It's not clearly mentioned in the documentation, so I'm asking for clarification.
In Slack there have been discussions that starting with 9.3.x some user preferences have moved into the KV store. Unfortunately that is not documented. I'm not sure if that also affects this case, but I suggest that you create a support case with Splunk.
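If you want to poke around yourself, here is a minimal sketch to list the KV store collections visible on your search head (this only lists collections; it won't tell you which ones hold user preferences):

| rest /servicesNS/-/-/storage/collections/config splunk_server=local
| table title eai:acl.app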
Hi, Following up on the above discussion, has anyone else discovered that there are quite a few instances where the "incident_id" field is blank in the mc_notes lookup? The other fields (author.username, create_time and content) contain the correct information, but there is nothing in incident_id. That makes it a bit difficult to match the note to the corresponding incident.
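If it helps, a quick sketch to quantify how many notes are affected (assuming the lookup is readable via inputlookup):

| inputlookup mc_notes
| eval incident_id_state=if(isnull(incident_id) OR incident_id=="", "blank", "present")
| stats count by incident_id_state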
Yup. The so-called asynchronous forwarding or asynchronous load balancing helps greatly in reducing imbalance in data distribution. Without it, when using just time-based LB, a HF sends to one indexer for a specified period of time, then switches to another, then to another. But at any given point in time it only sends to one output (unless you're using multiple ingestion pipelines, in which case you will have multiples of this setup). And, on the subject of those pipelines: since you have a separate HF layer, you might want to try increasing your pipeline count if you have spare resources (mostly CPU) on your HFs. You need to adjust your load-balancing parameters accordingly; see the sketch below.
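For reference, a minimal sketch of the settings mentioned above (stanza name and values are illustrative, not recommendations; the asynchronous-forwarding-specific settings themselves are covered in Splunk's Asynchronous Forwarding article):

# outputs.conf on the HF: time-based load balancing across indexers
[tcpout:my_indexers]
server = idx1:9997,idx2:9997,idx3:9997
autoLBFrequency = 30

# server.conf on the HF: extra ingestion pipelines (needs spare CPU)
[general]
parallelIngestionPipelines = 2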
Hi @dmoberg

You could define multiple metrics as their own streams, such as:

| sim flow resolution=5000 query="A = data('demo.trans.count', rollup='rate').publish(label='A');B = data('demo.trans.latency', rollup='rate').publish(label='B')"
| chart latest(_value) over _time by sf_streamLabel
What is the problem you are trying to solve? Not the immediate technical "problem" - how to change sourcetype - but the business one. Why do you want to do that?
There is a very good writeup about this from Duane here: https://community.splunk.com/t5/Security/Encrypting-indexed-data-on-rest/m-p/40840/highlight/true#M1368 Think about what threats you want to secure yourself from, and what access the attacker you're trying to protect against would already have. If you want to do it just for the sake of compliance and checkbox security, just use filesystem-level or device-level encryption. But that is nowhere near a well-developed control.
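If you do go the device-level route, a rough sketch with LUKS (the device and mount point are placeholders for wherever $SPLUNK_DB lives):

# one-time setup of an encrypted volume; /dev/sdb1 is a placeholder
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 splunkdata
mkfs.xfs /dev/mapper/splunkdata
mount /dev/mapper/splunkdata /opt/splunk/var/lib/splunk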
Hmm... Right. Which is surprising, because null here is treated as a field name which simply turns out to be empty. If you assign a value to it, it will of course be used. And of course you can't drop mvappend, because you want to cover all fields. So the way around is to not assign null() in this case but an empty string. Apparently it's ignored with multivalue fields.
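A quick illustration of the workaround (a sketch; the field names are made up):

| makeresults
| eval a="foo", b="bar"
``` assigning "" instead of null() - the empty string is apparently dropped from the multivalue result ```
| eval combined=mvappend(a, "", b)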
Hi @BradOH

Please could you check the output of btool? Does it list is_risky=false?

$SPLUNK_HOME/bin/splunk cmd btool commands list --debug dbxlookup

As @PickleRick said, make sure not to modify default/commands.conf in the app as this could get overwritten, although I would have thought it would work if set in local/commands.conf. Did you see any specific errors when you tried this?
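For reference, the setting itself would look like this (a sketch, assuming the search command stanza is named dbxlookup as above):

# local/commands.conf in the app
[dbxlookup]
is_risky = false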
Hi @PhoenixA

Unfortunately the schema for the tab bar does not allow any customisation; even things which are natively supported by the Tab Bar framework cannot currently be manipulated using the Dashboard Studio JSON editor.
Hi @Poojitha

The key here is to end up with a field called "metric_name:<yourMetricName>" with a numeric value containing your metric value. For example: metric_name:cpu_utilization=45.5

Here is an example SPL which might help. I've used some sample data at the top to structure this for testing:

| makeresults
| eval _raw="{\"log.dmc\":{\"metricName\":\"cpu_utilization\",\"tenantId\":\"12345\",\"value\":75.3,\"timestamp\":\"2025-07-14 09:45:00.123\"}}"
| eval _raw=json_extract_exact(_raw,"log.dmc")
| spath
``` end of sample generation ```
| eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval metric_value = tonumber(value)
| eval metric_name:{metricName}=metric_value
| table tenantId metric_name*
| mcollect index=test_metrics
As @livehybrid said, this seems to be a bug. So create a support case with Splunk for it.
Hi @pedropiin  Where do the values currently come from that you want to use within your eval?
You should use asynchronous forwarding to help with this situation. Here is one set of instructions for it: https://splunk.my.site.com/customer/s/article/Asynchronous-Forwarding-to-Splunk There are lots of other articles about it, which you can easily find, e.g. by asking your favorite search engine.
Hi All,

I have a query that converts event logs to metrics (search-time processing):

index=<indexname> sourcetype=<sourcetype> host=<hostname>
| spath input=log.dmc
| eval metric_name = 'log_processed.dmc.metricName'
| eval tenantId = 'log.dmc.tenantId'
| eval metric_value = tonumber('log_processed.dmc.value')
| eval _time = strptime('log_processed.timestamp', "%Y-%m-%d %H:%M:%S.%3N")
| fields _time, metric_name, tenantId, metric_value
| rename metric_value as metric_name::metric_value metric_name as metric
| table metric "metric_name::metric_value" _time tenantId
| mcollect index=test_metrics

The test_metrics here is an index with the metrics category. From the documentation, I understood the metric field should be displayed as in the example when using metric_name::metric_value: https://help.splunk.com/en/splunk-enterprise/get-data-in/metrics/9.4/introduction-to-metrics/get-started-with-metrics But with the query I am using, it is getting displayed as a separate field with just a numerical value (not in the screenshot example format from the documentation linked above). Also, the metric_name field is getting displayed only after it is renamed. Please let me know what it is that I am doing wrong.

Thanks, PNV
Here is a link which contains a tool to find and download old unsupported Splunk versions: https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Migration-from-RHEL-to-AL2023/m-p/749226/highlight/true#M22568 It also contains some discussion which you should be aware of when you are planning an update and doing it.
Could you describe the issue you are trying to solve, not the action with which you are trying to solve it?
Hi livehybrid: Thank you, I tried changing the default template to Template="RSYSLOG_SyslogProtocol23Format" and now it works! Thanks for the help.
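For anyone landing here later, the change looks roughly like this in rsyslog (a sketch; the target host and port are placeholders):

# forward with the built-in RFC 5424-style template
*.* action(type="omfwd" target="splunk-hf.example.com" port="514" protocol="tcp"
    template="RSYSLOG_SyslogProtocol23Format")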
Hi, as others have already said, it isn't currently possible. If you think that this is really necessary, then create an entry on ideas.splunk.com. Of course you could encrypt at the filesystem level with OS / cloud tools if needed. Then you could create a separate environment for those indexes. But you must also have a separate SH to access the indexers where those indexes are. And remember that when the SH has run those queries, the data will be on its disks for some time before it expires. During that time, anyone who has command-line access as splunk or root can see that data. So you have a lot of other things to consider beyond just adding a key to access those indexes! r. Ismo