All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I feel content accepting that bundle checksums are by design present to visualise the different states and not to detect modifications between validation and application. Thank you for the link though, I will definitely have a look at the presentation. With a little luck it might even be available online somewhere. All the best
While I still find this "counter-intuitive" where checksums are concerned, I understand that this is by design. The validated bundle is modified upon application, and hence the checksum changes. The checksum is not there to prevent pushing bundles that have been modified after validation, but rather to keep track of the states of changes/bundles. Thank you for the explanation
Every bucket has to store every dimension value once, so if you are using a million unique IDs to reference combinations of fewer than a million unique dimension strings, you are making the situation worse. Using KV Store is a great idea for repetitive asset information, such as adding context to a hostname, but in this situation you should still store the meaningful unique identifier (the hostname) as a dimension. I believe your best solution will be some combination of dimensions and a KV Store lookup to enrich them, but don't go 100% in either direction; if you find yourself creating new unique keys to make it work, I think it's going too far. The only other suggestion I have: if you have large logical groups of systems without overlapping dimensions, you could put them into separate indexes and use wildcards in your index filter to access them all. That will keep the TSIDX files smaller and performance higher.
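To illustrate the combined approach, here is a minimal SPL sketch, assuming a metrics index prefix of metrics_, a metric named cpu.usage, and a KV Store lookup called asset_info keyed on host (all hypothetical names):

| mstats avg(cpu.usage) AS avg_cpu WHERE index=metrics_* span=5m BY host
| lookup asset_info host OUTPUT owner location

The hostname stays a dimension, the wildcarded index filter spans the per-group indexes, and the lookup adds the repetitive asset context at search time.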
Have you looked at Factory Talk, Fiix, or Plex? They are paid software packages, and my company sells them. I am actually looking to develop a Splunk solution for PLCs, as I am a long-time Splunk developer. I also just realized this question is 3.5 years old, ha!
Hello, I just started using the new Dashboard Studio at work and I am having a few problems. For one, with classic dashboards I was able to share my input configuration for drop-downs and such with someone else by sharing the current URL; however, the URL does not seem to contain that information this time, and every time I share or reload it reverts to the defaults. What is the solution for sharing here?
@Aresndiz Check out https://docs.splunk.com/Documentation/Splunk/9.3.2/Alert/ThrottleAlerts . You can control alert throttling behavior using these settings in your saved search:

alert.suppress = 1
alert.suppress.period = <time-value>

For your specific case of wanting to suppress similar results but still trigger on changes, you can also use:

alert.suppress.fields = <comma-separated-field-list>

This tells Splunk to suppress alerts only if the specified fields contain the same values. If the values change, a new alert will trigger even if it is within the suppression period. This gives you the balance of avoiding duplicate notifications while still catching important changes. If this helps, please upvote.
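Putting those together, a minimal savedsearches.conf sketch (the stanza name, suppression period, and field list below are placeholders for your own values):

[My Change Alert]
alert.suppress = 1
alert.suppress.period = 24h
alert.suppress.fields = host,status

With this, a result for the same host/status combination within 24 hours stays suppressed, while a new combination fires immediately.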
https://community.splunk.com/t5/Getting-Data-In/Route-event-data-to-different-target-groups-issue/td-p/366461
Have you configured a TCP input on the HF, or are you using a separate syslog server (like rsyslog or syslog-ng) to receive those? If you are using the HF, then add a real syslog server in front of it and collect the logs from the local files.
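For the file-collection side, a minimal inputs.conf sketch on the HF could look like this (the directory layout, index, and host_segment are assumptions for illustration):

[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
index = network
host_segment = 4
disabled = false

Here host_segment = 4 takes the host name from the fourth path segment, i.e. the per-device subdirectory the syslog server writes into.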
Hi, I'm trying to add the source information of the metric (like the k8s pod name, k8s node name, etc.) from the splunk-otel-collector agent and then send it to the gateway (data forwarding model). I tried using the attributes and resource processors to add the source info, then enabled those processors in the pipelines in agent_config.yaml. In gateway_config.yaml, I added the processors with from_attribute to read from the agent's attribute. But I still couldn't add additional source tags to my metric. Can anyone help here? Let me know if you need more info, I can share. Thanks, Naren
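For reference, the kind of agent-side configuration being described might look roughly like this sketch (the processor name, attribute key, and the K8S_NODE_NAME environment variable are assumptions; the variable would have to be injected, e.g. via the Kubernetes downward API):

processors:
  resource/add-source:
    attributes:
      - key: k8s.node.name
        value: "${K8S_NODE_NAME}"
        action: upsert

service:
  pipelines:
    metrics:
      processors: [memory_limiter, resource/add-source, batch]

(receivers and exporters omitted for brevity)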
Can you post your db input configuration? Add it in a code block (the </> button in the editor).
@Brett have you any answers to this?
Have you looked at these?
https://community.splunk.com/t5/All-Apps-and-Add-ons/Limit-choices-in-default-TIMEPICKER
https://hurricanelabs.com/splunk-tutorials/how-to-set-custom-time-range-presets-in-splunk/
@fatsug Hi! AFAIK, the last_validated_bundle is different from the active_bundle because they have different purposes within the bundle lifecycle. The validation process creates a temporary bundle to verify the configuration changes, while the apply process creates a new bundle that includes additional metadata and runtime information needed for the actual deployment across the cluster. When you run splunk apply cluster-bundle, it creates a new bundle from the master-apps directory, and that's why you're seeing a different checksum than in your validation step. After successful application, all checksums align, since they now refer to the same deployed bundle.
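If you want to watch the checksums converge, one quick check (run on the cluster manager) is:

splunk show cluster-bundle-status

which lists the active and latest bundle checksums for the manager and each peer.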
You should always disable any unnecessary components, such as KV Store, where they are not needed. See https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/DisableunnecessarySplunkcomponents
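For example, KV Store can be turned off with this server.conf stanza (followed by a splunkd restart), as described in the linked documentation:

[kvstore]
disabled = true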
It could be something like "host\s*=". The best way is to use btool with --debug to see where it has been defined.
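For example, assuming the setting lives in inputs.conf and you have GNU grep available:

splunk btool inputs list --debug | grep -E "host\s*="

The --debug flag prefixes every line with the file the setting comes from.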
Hi, I'm afraid that you must do this on the HF or IDX with props and transforms. The reason is those file/directory names: I think Splunk internally expands them into one equal name, and for that reason you have only one monitoring stanza, not three. Maybe you could try putting them in a different order, with the wildcard one last. Use btool to see how Splunk reads them and in which order it puts them. Another tool for checking how they are read is "splunk list inputstatus", which shows which files have been read and which stanza each belongs to. r. Ismo
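To illustrate the overlap with hypothetical paths: a recursive wildcard stanza like the second one below can match the same files as the more specific one above it, so Splunk may collapse them into a single effective monitor:

[monitor:///var/log/app_a/app.log]
sourcetype = app_a

[monitor:///var/log/.../app.log]
sourcetype = app_generic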
I didn't know you had to disable kvstore on indexers. It worked; I was able to update my server. Thanks.
I just did this from the /opt/splunk directory on all 3 SHC members, and the deployer:

grep --include=inputs.conf -rnw . -e "host ="

The only places where I see the hostname in an inputs.conf are $SPLUNK_HOME/etc/system/local and $SPLUNK_HOME/var/run/splunk/confsnapshot/baselinelocal/inputs.conf. Kind of at a loss...
Hi, do you mind sharing the search string/SPL you used to get the AD login information? Thank you!
Here https://conf.splunk.com/files/2017/slides/pushing-configuration-bundles-in-an-indexer-cluster.pdf is an old .conf presentation about pushing the cluster bundle. It contains quite a lot of information, and a troubleshooting section starts on page 41. It lists, for example, the places where those bundles are stored on the CM and the peers. Basically, you could look at their contents to see whether they differ, which would explain the difference in the hash. r. Ismo
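For a quick comparison on a peer, something along these lines should work (the remote-bundle path is the usual location, but verify it against the presentation and your version; file names vary):

ls -l $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle/
md5sum $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle/*.bundle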