All Posts


Your token is exposed in your post. Please be sure to replace that token ASAP. As for troubleshooting, I noticed you have two different values for OTEL_EXPORTER_OTLP_ENDPOINT. Is your goal to point your telemetry to a locally running instance of the OTel Collector, or do you want to bypass a collector and send directly to the Observability Cloud backend?
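For illustration, the two options would be set something like this (a sketch, assuming the default OTLP/gRPC port for a local collector; <your-realm> is a placeholder for your Observability Cloud realm):

# Option 1: send telemetry to a locally running OTel Collector
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# Option 2: bypass the collector and send directly to the Observability Cloud backend
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<your-realm>.signalfx.com"

Either works, but the variable should hold exactly one of them, not both.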
It depends. In most cases I prefer the SH side, but it depends on your JSON and your needs. Can you show your raw event on disk, before indexing?
Hi, this should show how it's done: https://community.splunk.com/t5/Splunk-Search/Converting-d-for-eval-command/m-p/499983 r. Ismo
From which one? The Search Head or the Universal Forwarder? Because I can already tell you, neither fixes the issue. Removed from the Search Head, the issue remains. Put it back on the Search Head and removed from the Universal Forwarder, the issue changes: the data is no longer parsed as JSON, so it loses all formatting and the table is blank.
Which user are you running it as, and how (exactly) did you do the upgrade? Is this an all-in-one or a distributed environment? Were there no errors when you started it with 9.2.4?
You have INDEXED_EXTRACTIONS = json twice, once on the Universal Forwarder and once on the Search Head. Please remove one of them.
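For example, if the index-time extraction stays on the Universal Forwarder, the Search Head stanza would shrink to something like this (a sketch of the idea, not a tested config):

[rabbitmq:queues:json]
KV_MODE = none
AUTO_KV_JSON = false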
Starting with 9.3.2. It's running on Amazon Linux 2.
Was this issue present with 9.2.4, or only after you started it with 9.3.2? Which Linux distro and version do you have, and are those the same as earlier?
Hi all, I was upgrading Splunk Enterprise from 9.0.x to 9.2.4 and then 9.3.2. When I try to restart the Splunk service I get the following:

Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.
Splunkd.service holdoff time over, scheduling restart.
Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
start request repeated too quickly for Splunkd.service
Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.

I'll add that from a Splunk standpoint I am a complete noob. I did some research on the upgrade process and followed the Splunk documentation. TIA!
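To dig out the underlying error, the journal and splunkd's own log are the usual first stops. Something like this, assuming the unit name from the messages above and a default /opt/splunk install path:

sudo journalctl -u Splunkd.service --no-pager -n 50
sudo tail -n 50 /opt/splunk/var/log/splunk/splunkd.log

The journalctl output usually names the actual failure behind the generic "start request repeated too quickly" message.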
I have seen a lot of similar Questions/Solutions with this aggravating issue, none of which are working.

Trying to pull RabbitMQ API (JSON) data into Splunk. A bash script creates /opt/data/rabbitmq-queues.json:

curl -s -u test:test https://localhost:15671/api/queues | jq > /opt/data/rabbitmq-queues.json

The Universal Forwarder has the following props.conf on the RabbitMQ server:

[rabbitmq:queues:json]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = JSON
KV_MODE = none

And the inputs.conf:

[batch:///opt/data/rabbitmq-queues.json]
disabled = false
index = rabbitmq
sourcetype = rabbitmq:queues:json
move_policy = sinkhole
crcSalt = <SOURCE>
initCrcLength = 1048576

We run btool on the Universal Forwarder to verify the settings are getting applied correctly:

sudo /opt/splunkforwarder/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunkforwarder/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunkforwarder/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunkforwarder/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunkforwarder/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunkforwarder/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunkforwarder/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunkforwarder/etc/system/default/props.conf HEADER_MODE =
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = JSON
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunkforwarder/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunkforwarder/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunkforwarder/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunkforwarder/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunkforwarder/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunkforwarder/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunkforwarder/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunkforwarder/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunkforwarder/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunkforwarder/etc/system/default/props.conf TRANSFORMS =
/opt/splunkforwarder/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunkforwarder/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunkforwarder/etc/system/default/props.conf maxDist = 100
/opt/splunkforwarder/etc/system/default/props.conf priority =
/opt/splunkforwarder/etc/system/default/props.conf sourcetype =
/opt/splunkforwarder/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunkforwarder/etc/system/default/props.conf unarchive_cmd_start_mode = shell

On the local Search Head we have the following props.conf:

[rabbitmq:queues:json]
KV_MODE = none
INDEXED_EXTRACTIONS = json
AUTO_KV_JSON = false

We run btool on the Search Head to verify the settings are getting applied correctly:

sudo -u splunk /opt/splunk/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = json
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunk/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
/opt/splunk/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunk/etc/system/default/props.conf unarchive_cmd_start_mode = shell

However, even with all that in place, we're still seeing duplicate values when using tables:

index="rabbitmq" | table _time messages state
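A quick way to check whether such duplicates come from the fields being extracted twice (rather than from duplicate events) is to count the values in one of the fields from the table above, for example:

index="rabbitmq" | head 1 | eval value_count=mvcount(messages) | table value_count

If value_count is 2 while the raw JSON contains the key only once, the field is being extracted at both index time and search time, which matches the double INDEXED_EXTRACTIONS noted in the reply above.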
That's the best method I know of. There is an app for Apple devices, but it's just a desktop-sized viz, so you might as well stick with what you have now.
Thank you for your response. The answer is "yes" to both questions. I've tried mapping the role to Name, memberOf, and FriendlyName. It looks like the response uses "DN format," and the example in the docs is similar to the response I'm receiving. One difference I did notice from the doc, however, is the value it's returning. In the doc, it appears to be returning LDAP memberships: CN=Employee, OU=SAML Test, DC=qa, etc... Our back-end uses Grouper for authorization, and the value looks more like org:sections:managed:employee:saml-test:qa:etc... I wonder if that's confusing Splunk...? I'm grasping at this point.  
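For anyone reading along, the mapping in question lives in authentication.conf on the search head, along these lines (a sketch; the Grouper string is the sanitized value from above and would need to match what the IdP returns verbatim):

[roleMap_SAML]
user = org:sections:managed:employee:saml-test:qa:etc...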
If you are not familiar with changing depth_limit, check out this material: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf

depth_limit = <integer>
* Limits the amount of resources that are spent by PCRE when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000

Your match is 1500+ characters into your event. I know you have sanitized it, so you need to check your true data to get the right count.
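For example, raising it would look something like this in $SPLUNK_HOME/etc/system/local/limits.conf, followed by a restart (a sketch; 10000 is just an illustrative value, assuming depth_limit sits in the [rex] stanza as in the docs above):

[rex]
# raise PCRE backtracking depth past the position of the match,
# which is 1500+ characters into the event here
depth_limit = 10000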
Thank you, this does work wonderfully. I am still learning and not wanting to be spoon-fed. Can you explain why you chose to handle the JSON at the point you did, and what this portion of the piping is doing? | chart sum(count) as count over _raw by DeviceCompliance  The rest of it makes complete sense when I work through it out loud.
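For anyone else working through this: chart sum(count) over _raw by DeviceCompliance builds one row per _raw value and one column per DeviceCompliance value, with each cell holding the summed count. A self-contained illustration with made-up values (not the original data):

| makeresults
| eval _raw="deviceA", DeviceCompliance="Compliant", count=2
| append [| makeresults | eval _raw="deviceA", DeviceCompliance="NonCompliant", count=1]
| append [| makeresults | eval _raw="deviceB", DeviceCompliance="Compliant", count=4]
| chart sum(count) as count over _raw by DeviceCompliance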
You can do that using a shell script in the CLI.  Look for "XYZ" in $SPLUNK_HOME/etc/apps/*/*/savedsearches.conf, $SPLUNK_HOME/etc/system/local/savedsearches.conf, and $SPLUNK_HOME/etc/apps/*/*/data/ui/views/*.
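For example, something along these lines (assuming "XYZ" is the literal string you are hunting for):

# print file and line number for every match across saved searches and dashboard XML
grep -n "XYZ" \
  "$SPLUNK_HOME"/etc/apps/*/*/savedsearches.conf \
  "$SPLUNK_HOME"/etc/system/local/savedsearches.conf \
  "$SPLUNK_HOME"/etc/apps/*/*/data/ui/views/*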
What is the best way to display dashboards/visualizations on multiple screens in a SOC? I am using a Raspberry Pi, Firefox, and a tab-cycle add-on right now. There has to be a better way.
Worked for me, thank you
The fix for me was to rename the expired server.pem certificate file, restart Splunk (which automatically creates a new server.pem certificate), then run splunk clean kvstore --local, then splunk migrate kvstore-storage-engine --target-engine wiredTiger.
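In command form, the sequence is roughly this (assuming the default certificate location under $SPLUNK_HOME/etc/auth):

# move the expired cert aside; Splunk regenerates server.pem on restart
mv "$SPLUNK_HOME"/etc/auth/server.pem "$SPLUNK_HOME"/etc/auth/server.pem.expired
"$SPLUNK_HOME"/bin/splunk restart
# rebuild the KV store, then switch it to the WiredTiger engine
# (note: clean may require splunkd to be stopped first)
"$SPLUNK_HOME"/bin/splunk clean kvstore --local
"$SPLUNK_HOME"/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger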
Thank you, it worked for me
This is not a particularly crucial question, but it has been nagging me for a while. When applying changes to indexes.conf on the manager node I usually do a validate -> show -> apply round, but I am somewhat confused about a detail. When you validate a bundle after making modifications, you get an updated "last_validated_bundle" checksum. In my mind, when you then apply this bundle, that "last_validated_bundle" checksum should become the new "latest_bundle" and "active_bundle". But this is not the case.

$ splunk apply cluster-bundle
Creating new bundle with checksum=<does_not_match_last_validated_bundle>
Applying new bundle. The peers may restart depending on the configurations in applied bundle.
Please run 'splunk show cluster-bundle-status' for checking the status of the applied bundle.
OK

After the new bundle has been applied, the active_bundle, latest_bundle and last_validated_bundle checksums all match again. But why is the checksum produced after bundle validation not matching the checksum for the applied bundle? Have a great weekend!
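For reference, the validate -> show -> apply round described above looks something like this in command form, run on the manager node (flags may vary by version):

splunk validate cluster-bundle --check-restart
splunk show cluster-bundle-status
splunk apply cluster-bundle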