All Posts


Can we try using the gRPC endpoint (http://localhost:4317) instead of the HTTP endpoint (http://localhost:4318)? It's possible that PHP uses gRPC by default.
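If you switch protocols, the exporter configuration would look something like this (a sketch; these are the spec-defined OpenTelemetry SDK environment variable names, but check what the PHP instrumentation in your version actually honors):

```shell
# Point the OTLP exporter at the collector's gRPC port instead of HTTP.
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```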
Hi bishida, The token has been rotated. As a matter of fact, I tried to set up the direct connection, but it was configured with the OTel Collector before, with the same result on the env variables. Image below. The idea is to configure a locally running OTel Collector instance. As mentioned before, neither sending locally to the OTel Collector nor directly to o11y Cloud is working.
Hello Splunk Community,

I am running Splunk Enterprise Version: 9.2.3

Steps to reproduce:
1. Make a config change to an app on the Cluster Manager - $SPLUNK_HOME/etc/master-apps/<custom_app>/local/indexes.conf
2. Validate and Check Restart from the Cluster Manager GUI.

Bundle Information:
- Updated Time shows a date/time from last month (did not update)
- The Active Bundle ID did not change

I am unable to make changes to apps and have them pushed to the Indexers. Note: there are other issues. All three of my clustered Indexers are in Automatic Detention, and I am seeing these messages on the GUI:
- Search peer xxx has the following message: The minimum free disk space (1000MB) reached for /opt/splunk/var/run/splunk/dispatch.
- Search peer xxx has the following message: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.

Ultimately, I am trying to push a change to the frozenTimePeriodInSecs setting to reduce stored logs and free up space. Thanks for your help!
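For the retention change itself, a minimal sketch of what would go in the Cluster Manager's copy of the app (the index name below is a placeholder; 2592000 seconds is 30 days, so adjust to your retention target):

```
# $SPLUNK_HOME/etc/master-apps/<custom_app>/local/indexes.conf (Cluster Manager)
[your_index_name]                    # placeholder: the index whose retention you want to cut
frozenTimePeriodInSecs = 2592000     # freeze (delete/archive) buckets older than 30 days
```

And the CLI equivalent of the GUI push, which can be handy for watching bundle status while troubleshooting (standard cluster-bundle commands; verify the flags against your 9.2 docs):

```
splunk validate cluster-bundle --check-restart
splunk apply cluster-bundle
splunk show cluster-bundle-status
```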
Hi All, I am trying to create a summary index for Cisco ESA Textmail logs. I will then rebuild the Email data model using the summary index. The scheduled search is running correctly, but when I try to search the summary index I get no events returned. How does one check that events are going into the summary index correctly?

Steps Taken:
- Created a new index called email_summary
- Created a scheduled search to run every 15 minutes
- In the settings I have ticked 'Enable summary indexing'

Saved Search

index=email sourcetype=cisco:esa:textmail
| stats values(action) as action, values(dest) as dest, values(duration) as duration,
    values(file_name) as file_name, values(message_id) as message_id,
    values(recipient) as recipient, dc(recipient) as recipient_count,
    values(recipient_domain) as recipient_domain, values(src) as src,
    values(src_user) as src_user, values(src_user_domain) as src_user_domain,
    values(message_subject) as subject, values(tag) as tag, values(url) as url,
    values(user) as user, values(vendor_product) as vendor_product,
    values(vendor_action) as filter_action, values(reputation_score) as filter_score
    BY internal_message_id

Thanks, Dave
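To confirm whether anything is landing in the summary index, a couple of checks can help (a sketch; it assumes the default 'Enable summary indexing' behavior, where each summary event's source is set to the name of the saved search):

```
index=email_summary earliest=-24h
| stats count min(_time) as first_event max(_time) as last_event by source
```

If that returns nothing, the scheduler log usually says why (skipped runs, errors, zero results):

```
index=_internal sourcetype=scheduler
| stats count by savedsearch_name status
```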
All-in-one environment. The user account on the machine I used was ec2-user. I assume, but am not sure, that that was the user used to do the original install. Honestly, I didn't try to start the instance after the 9.2.4 upgrade. I was on 9.0, applied the 9.2.4 upgrade, and then immediately applied the 9.3.2 upgrade. I used the tgz upgrade file and followed the 9.3.2 upgrade documentation. Again, total noob here.
Your token is exposed in your post. Please be sure to replace that token ASAP. As for troubleshooting, I noticed you have 2 different values for OTEL_EXPORTER_OTLP_ENDPOINT. Is your goal to point your telemetry to a locally running instance of the OTel collector or do you want to bypass a collector and send directly to the Observability Cloud backend? 
It depends. In most cases I prefer the SH side, but this depends on your JSON and your needs. Can you show your raw event on disk before indexing?
Hi, this should show how it can be done: https://community.splunk.com/t5/Splunk-Search/Converting-d-for-eval-command/m-p/499983 r. Ismo
From which one? The Search Head or the Universal Forwarder? Because I can already tell you, neither fixes the issue. Removed from the Search Head: issue remains. Put it back on the Search Head and removed it from the Universal Forwarder: the issue changes to it not parsing the data as JSON, so it loses all formatting and the table is blank.
Which user are you running it as, and how (exactly) did you do the upgrade? Is this an all-in-one or a distributed environment? Was there no error when you started it with 9.2.4?
You have INDEXED_EXTRACTIONS = json twice. Please remove one of them.
Starting with 9.3.2. It's running on Amazon Linux 2.
Was this issue present with 9.2.4, or only after that, when you started it with 9.3.2? Which Linux distro and version do you have, and are those the same as earlier?
Hi all, I was upgrading Splunk Enterprise from 9.0.x to 9.2.4 and then 9.3.2. When I try to restart the Splunk service I get the following:

Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.
Splunkd.service holdoff time over, scheduling restart.
Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
start request repeated too quickly for Splunkd.service
Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.

I'll add that from a Splunk standpoint I am a complete noob. I did some research on the upgrade process and followed the Splunk documentation.

TIA!
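Since the unit file was generated by an older version, one recovery sketch is to regenerate it after the upgrade (this assumes a default /opt/splunk install, a systemd-based distro, and that splunk is the intended service account; these are the standard boot-start CLI commands, but verify them against the 9.3 admin docs before running):

```
sudo systemctl stop Splunkd
sudo /opt/splunk/bin/splunk disable boot-start
sudo /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
sudo systemctl start Splunkd
sudo journalctl -u Splunkd -e    # the journal usually shows the real failure reason
```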
I have seen a lot of similar Questions/Solutions with this aggravating issue, none of which are working.

Trying to pull RabbitMQ API (JSON) data into Splunk. A Bash script creates /opt/data/rabbitmq-queues.json:

curl -s -u test:test https://localhost:15671/api/queues | jq > /opt/data/rabbitmq-queues.json

The Universal Forwarder has the following props.conf on the RabbitMQ server:

[rabbitmq:queues:json]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = JSON
KV_MODE = none

And the inputs.conf:

[batch:///opt/data/rabbitmq-queues.json]
disabled = false
index = rabbitmq
sourcetype = rabbitmq:queues:json
move_policy = sinkhole
crcSalt = <SOURCE>
initCrcLength = 1048576

We run btool on the Universal Forwarder to verify the settings are getting applied correctly:

sudo /opt/splunkforwarder/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunkforwarder/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunkforwarder/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunkforwarder/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunkforwarder/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunkforwarder/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunkforwarder/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunkforwarder/etc/system/default/props.conf HEADER_MODE =
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = JSON
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunkforwarder/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunkforwarder/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunkforwarder/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunkforwarder/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunkforwarder/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunkforwarder/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunkforwarder/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunkforwarder/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunkforwarder/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunkforwarder/etc/system/default/props.conf TRANSFORMS =
/opt/splunkforwarder/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunkforwarder/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunkforwarder/etc/system/default/props.conf maxDist = 100
/opt/splunkforwarder/etc/system/default/props.conf priority =
/opt/splunkforwarder/etc/system/default/props.conf sourcetype =
/opt/splunkforwarder/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunkforwarder/etc/system/default/props.conf unarchive_cmd_start_mode = shell

On the local Search Head we have the following props.conf:

[rabbitmq:queues:json]
KV_MODE = none
INDEXED_EXTRACTIONS = json
AUTO_KV_JSON = false

We run btool on the Search Head to verify the settings are getting applied correctly:

sudo -u splunk /opt/splunk/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = json
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunk/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
/opt/splunk/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunk/etc/system/default/props.conf unarchive_cmd_start_mode = shell

However, even with all that in place, we're still seeing duplicate values when using tables:

index="rabbitmq" | table _time messages state
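For what it's worth, one pattern worth ruling out with INDEXED_EXTRACTIONS is the structured extraction running more than once in the pipeline (for example on the UF and again on an intermediate heavy forwarder or indexer), which is a known way to end up with doubled field values. A sketch of the split normally aimed for, with extraction done only on the forwarder (this is an assumption about your topology, not a confirmed fix):

```
# UF (RabbitMQ server) props.conf -- the only place that parses the JSON
[rabbitmq:queues:json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false

# Search Head props.conf -- no INDEXED_EXTRACTIONS here;
# just keep search-time re-extraction of the same fields switched off
[rabbitmq:queues:json]
KV_MODE = none
AUTO_KV_JSON = false
```

Running something like index="rabbitmq" | eval n=mvcount(messages) can also help confirm whether the duplication is in the indexed fields themselves or only at render time.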
That's the best method I know of. There is an app for Apple devices, but it's just a desktop-sized viz, so you might as well stick with what you have now.
Thank you for your response. The answer is "yes" to both questions. I've tried mapping the role to Name, memberOf, and FriendlyName. It looks like the response uses "DN format," and the example in the docs is similar to the response I'm receiving. One difference I did notice from the doc, however, is the value it's returning. In the doc, it appears to be returning LDAP memberships: CN=Employee, OU=SAML Test, DC=qa, etc... Our back-end uses Grouper for authorization, and the value looks more like org:sections:managed:employee:saml-test:qa:etc... I wonder if that's confusing Splunk...? I'm grasping at this point.  
If you are not familiar with changing depth_limit, check out this material: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf

depth_limit = <integer>
* Limits the amount of resources that are spent by PCRE when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000

Your match is 1500+ characters into your event. I know you have sanitized it, so you need to check your true data to get the right count.
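If you do raise it, a minimal sketch (this assumes the attribute sits under the [rex] stanza as in recent limits.conf specs; verify against your version's docs, and deploy it in an app's local directory rather than editing system/default):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/limits.conf  (<your_app> is a placeholder)
[rex]
depth_limit = 10000
```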
Thank you, this does work wonderfully. I am still learning and not wanting to be spoon-fed. Can you explain why you chose to json at the point you did, and what is this portion of the piping doing?

| chart sum(count) as count over _raw by DeviceCompliance

The rest of it makes complete sense when I work through it out loud.
You can do that using a shell script in the CLI.  Look for "XYZ" in $SPLUNK_HOME/etc/apps/*/*/savedsearches.conf, $SPLUNK_HOME/etc/system/local/savedsearches.conf, and $SPLUNK_HOME/etc/apps/*/*/data/ui/views/*.
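A minimal sketch of such a script (the helper name find_refs and the .xml suffix on the views glob are my additions; the paths follow the ones above and assume $SPLUNK_HOME is set):

```shell
# find_refs: print Splunk config and dashboard files that mention a given string.
find_refs() {
    pattern=$1
    # grep -rl lists matching file names; missing paths are silenced.
    grep -rl "$pattern" \
        "$SPLUNK_HOME"/etc/apps/*/*/savedsearches.conf \
        "$SPLUNK_HOME"/etc/system/local/savedsearches.conf \
        "$SPLUNK_HOME"/etc/apps/*/*/data/ui/views/*.xml 2>/dev/null
}
```

Calling find_refs "XYZ" then prints the path of each savedsearches.conf and dashboard XML file containing the string.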