All Posts

Was this issue with 9.2.4, or only after that, when you started it with 9.3.2? Which Linux OS distro and version do you have, and are those the same as earlier?
Hi all, I was upgrading Splunk Enterprise from 9.0.x to 9.2.4 and then 9.3.2. When I try to restart the Splunk service I get the following:

Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.
Splunkd.service holdoff time over, scheduling restart.
Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
start request repeated too quickly for Splunkd.service
Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.

I'll add that, from a Splunk standpoint, I am a complete noob. I did some research on the upgrade process and followed the Splunk documentation.

TIA!
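(A minimal sketch of how to dig further, assuming the default unit name Splunkd.service and an /opt/splunk install; adjust both to your environment. The idea is to ask systemd why it gave up, then start splunkd outside systemd to see its own error, and, if the unit file was generated by the old version, regenerate it.)

journalctl -u Splunkd.service --no-pager -n 50    # why systemd keeps killing the unit
systemctl cat Splunkd.service                     # inspect the generated unit file
sudo -u splunk /opt/splunk/bin/splunk start       # start outside systemd to see splunkd's own error

# Regenerating the boot-start unit after an upgrade sometimes helps:
sudo /opt/splunk/bin/splunk disable boot-start
sudo /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk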
I have seen a lot of similar Questions/Solutions with this aggravating issue, none of which are working.

Trying to pull RabbitMQ API (JSON) data into Splunk. A bash script creates /opt/data/rabbitmq-queues.json:

curl -s -u test:test https://localhost:15671/api/queues | jq > /opt/data/rabbitmq-queues.json

The Universal Forwarder has the following props.conf on the RabbitMQ server:

[rabbitmq:queues:json]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = JSON
KV_MODE = none

And the inputs.conf:

[batch:///opt/data/rabbitmq-queues.json]
disabled = false
index = rabbitmq
sourcetype = rabbitmq:queues:json
move_policy = sinkhole
crcSalt = <SOURCE>
initCrcLength = 1048576

We run btool on the Universal Forwarder to verify the settings are getting applied correctly:

sudo /opt/splunkforwarder/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunkforwarder/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunkforwarder/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunkforwarder/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunkforwarder/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunkforwarder/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunkforwarder/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunkforwarder/etc/system/default/props.conf HEADER_MODE =
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = JSON
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunkforwarder/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunkforwarder/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunkforwarder/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunkforwarder/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunkforwarder/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunkforwarder/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunkforwarder/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunkforwarder/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunkforwarder/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunkforwarder/etc/system/default/props.conf TRANSFORMS =
/opt/splunkforwarder/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunkforwarder/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunkforwarder/etc/system/default/props.conf maxDist = 100
/opt/splunkforwarder/etc/system/default/props.conf priority =
/opt/splunkforwarder/etc/system/default/props.conf sourcetype =
/opt/splunkforwarder/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunkforwarder/etc/system/default/props.conf unarchive_cmd_start_mode = shell

On the local Search Head we have the following props.conf:

[rabbitmq:queues:json]
KV_MODE = none
INDEXED_EXTRACTIONS = json
AUTO_KV_JSON = false

We run btool on the Search Head to verify the settings are getting applied correctly:

sudo -u splunk /opt/splunk/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = json
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunk/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
/opt/splunk/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunk/etc/system/default/props.conf unarchive_cmd_start_mode = shell

However, even with all that in place, we're still seeing duplicate values when using tables:

index="rabbitmq" | table _time messages state
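One way to narrow this down (a sketch, not a definitive fix): duplicate values in a table usually mean the field is arriving as a multivalue field, most often because both an indexed extraction and a search-time extraction produced it. Counting the values per event makes that visible:

index="rabbitmq" sourcetype="rabbitmq:queues:json"
| eval messages_values=mvcount(messages), state_values=mvcount(state)
| table _time messages messages_values state state_values

If the counts come back as 2, the same field is being extracted twice (index time plus search time), which suggests the KV_MODE/AUTO_KV_JSON overrides are not taking effect where the search actually runs. If the counts are 1 but rows repeat, the file itself is probably being re-ingested, and the batch/sinkhole input and crcSalt settings are worth a second look.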
That's the best method I know of. There is an app for Apple devices, but it's just desktop-sized viz, so you might as well stick with what you have now.
Thank you for your response. The answer is "yes" to both questions. I've tried mapping the role to Name, memberOf, and FriendlyName. It looks like the response uses "DN format," and the example in the docs is similar to the response I'm receiving. One difference I did notice from the doc, however, is the value it's returning. In the doc, it appears to be returning LDAP memberships: CN=Employee, OU=SAML Test, DC=qa, etc... Our back-end uses Grouper for authorization, and the value looks more like org:sections:managed:employee:saml-test:qa:etc... I wonder if that's confusing Splunk...? I'm grasping at this point.  
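(For comparison while troubleshooting: on the Splunk side the SAML group-to-role mapping lives in authentication.conf under a roleMap stanza, and the group string has to match the attribute value in the response byte for byte, including case. A minimal sketch using the Grouper-style value above; the stanza suffix "SAML" and the role name "user" are placeholders, and the exact stanza name must match your saml authSettings stanza.)

[roleMap_SAML]
user = org:sections:managed:employee:saml-test:qa:etc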
If you are not familiar with changing depth_limit, check out this material: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf

depth_limit = <integer>
* Limits the amount of resources that are spent by PCRE when running patterns that will not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000

Your match is 1500+ characters into your event. I know you have sanitized it, so you need to check your true data to get the right count.
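For reference, a minimal sketch of where the setting could go, assuming the pattern runs through the rex command (the value 10000 is only an illustration; size it to your longest realistic event, and check which limits.conf stanza applies if the extraction is automatic rather than rex). There is also a per-sourcetype DEPTH_LIMIT in props.conf if you only want to raise it for one sourcetype.

# $SPLUNK_HOME/etc/system/local/limits.conf
[rex]
depth_limit = 10000

A restart (or at least a reload of the search tier) is typically needed for the change to take effect.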
Thank you, this does work wonderfully. I am still learning and don't want to be spoon-fed. Can you explain why you chose to handle the JSON at the point you did, and what this portion of the piping is doing?  | chart sum(count) as count over _raw by DeviceCompliance  The rest of it makes complete sense when I work through it out loud.
You can do that using a shell script in the CLI.  Look for "XYZ" in $SPLUNK_HOME/etc/apps/*/*/savedsearches.conf, $SPLUNK_HOME/etc/system/local/savedsearches.conf, and $SPLUNK_HOME/etc/apps/*/*/data/ui/views/*.
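A rough sketch of that, assuming a default $SPLUNK_HOME and that the old name appears as index=XYZ (review the grep hits before letting sed loose; searches written as index = XYZ or index="XYZ" would need extra patterns, and private objects under etc/users may need the same treatment):

# Find every saved search and dashboard that still references the old index
grep -ln 'XYZ' \
  "$SPLUNK_HOME"/etc/apps/*/*/savedsearches.conf \
  "$SPLUNK_HOME"/etc/system/local/savedsearches.conf \
  "$SPLUNK_HOME"/etc/apps/*/*/data/ui/views/* 2>/dev/null

# Replace index=XYZ with index=XYZ_audit in place, keeping .bak backups
grep -l 'index=XYZ' \
  "$SPLUNK_HOME"/etc/apps/*/*/savedsearches.conf \
  "$SPLUNK_HOME"/etc/system/local/savedsearches.conf \
  "$SPLUNK_HOME"/etc/apps/*/*/data/ui/views/* 2>/dev/null \
  | xargs sed -i.bak 's/index=XYZ\b/index=XYZ_audit/g'

Reload or restart Splunk afterwards so the changed .conf files and views are picked up.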
What is the best way to display dashboards/visualizations on multiple screens in a SOC? I am using a Raspberry Pi, Firefox, and a tab-cycle add-on right now. There has to be a better way.
Worked for me, thank you
The fix for me was to rename the expired server.pem certificate file, restart Splunk (which automatically creates a new server.pem certificate), then run splunk clean kvstore --local, then splunk migrate kvstore-storage-engine --target-engine wiredTiger.
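The same sequence as shell commands, as a sketch only: it assumes a default /opt/splunk install and the default certificate path, and follows the order described above; check the KV store docs for whether splunkd needs to be stopped around the clean step in your version.

mv /opt/splunk/etc/auth/server.pem /opt/splunk/etc/auth/server.pem.expired
/opt/splunk/bin/splunk restart      # a fresh server.pem is generated on startup
/opt/splunk/bin/splunk clean kvstore --local
/opt/splunk/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger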
Thank you, it worked for me
This is not a particularly crucial question, but it has been nagging me for a while. When applying changes to indexes.conf on the manager node I usually do a validate -> show -> apply round. But I am somewhat confused about a detail. When you validate a bundle after making modifications you get an updated "last_validated_bundle" checksum. In my mind, when you then apply this bundle, that "last_validated_bundle" checksum should become the new "latest_bundle" and "active_bundle". But this is not the case.

$ splunk apply cluster-bundle
Creating new bundle with checksum=<does_not_match_last_validated_bundle>
Applying new bundle. The peers may restart depending on the configurations in applied bundle. Please run 'splunk show cluster-bundle-status' for checking the status of the applied bundle.
OK

After the new bundle has been applied, the active_bundle, latest_bundle and last_validated_bundle checksums all match again. But why does the checksum produced after bundle validation not match the checksum of the applied bundle? Have a great weekend!
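For completeness, the round in question looks roughly like this on the manager node (a sketch; flags may vary by version):

splunk validate cluster-bundle --check-restart   # produces/updates last_validated_bundle
splunk show cluster-bundle-status                # compare active, latest, and last_validated checksums
splunk apply cluster-bundle                      # creates and pushes a new bundle with its own checksum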
Dear experts,

In my dashboard I have a time picker providing the token t_time. My search

index="abc" search_name="def"
  [| makeresults
   | eval earliest=relative_time($t_time.latest$,"-1d@d")
   | eval latest=relative_time($t_time.latest$,"@d")
   | fields earliest latest
   | format]
| table _time zbpIdentifier

should pick up that token and make sure only data from the last full day before t_time.latest is displayed. 2024-12-12 13:13 should be converted to earliest = 2024-12-11 00:00 and latest = 2024-12-11 23:59:59 (or 2024-12-12 00:00). As long as two actual dates are selected in the time picker, all works as expected. If, for example, "Last 7 days" is selected, the search fails and no data is returned. I'm guessing that in relative mode $t_time.latest$ is represented by something like "now", which causes problems for the relative_time function. So the question is: how do I detect this "now" and turn it into a date understood by relative_time?
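A sketch of one way around it, assuming the token arrives as either an epoch number, a relative string like "-7d@d", or the literal "now": normalise it to an epoch first, then derive earliest/latest from that.

index="abc" search_name="def"
  [| makeresults
   | eval lat=case(
         "$t_time.latest$"=="now" OR "$t_time.latest$"=="", now(),
         isnum("$t_time.latest$"), tonumber("$t_time.latest$"),
         true(), relative_time(now(), "$t_time.latest$"))
   | eval earliest=relative_time(lat, "-1d@d"), latest=relative_time(lat, "@d")
   | fields earliest latest
   | format]
| table _time zbpIdentifier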
Hi @YuliyaVassilyev, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Hi, I initially created an index named XYZ, and there are around 60 reports/alerts and 15 dashboards built on this index. Now the index name has to be changed to XYZ_audit, and I have to update all these reports with the new name of the index. Can I do this automatically using a script or any other way?
Ah okay, then I would execute it in the following order:
1. Do the fresh installation
2. Copy all custom apps and their configuration files to $SPLUNK_HOME/etc/apps/ from your source
3. Start Splunk
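A rough sketch of step 2, assuming /opt/splunk on both machines and SSH access to the old host (the host, user, and app name are placeholders; copy only your custom apps, since the built-in ones ship with the fresh install):

rsync -av olduser@oldhost:/opt/splunk/etc/apps/my_custom_app /opt/splunk/etc/apps/
chown -R splunk:splunk /opt/splunk/etc/apps   # files must be owned by the user that runs splunkd
/opt/splunk/bin/splunk start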
I am a beginner with Splunk. I am setting up Splunk Enterprise in a three-tier architecture with a Search Head server, an Indexer server, and a Heavy Forwarder server. I want to install the Splunk Add-on for Microsoft Cloud Services on the Heavy Forwarder server to ingest data from Azure Event Hubs. However, when I check the logs of the installed add-on (splunk_ta_microsoft-cloudservices_azure_audit.log), I see the following error:

2024-12-13 02:44:48,835 +0000 log_level=ERROR, pid=33699, tid=MainThread, file=rest.py, func_name=splunkd_request, code_line_no=67 | Failed to send rest request=https://127.0.0.1:8089/services/server/info, errcode=unknown, reason=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 175, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 723, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 1061, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 187, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f48c2a95e50>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:
~~~

Concern Point #1
It seems that the error has been resolved by adding the following lines to /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local/web.conf (just changing the request destination from <local of the Heavy Forwarder server> to <IP address of the Search Head server>):

[settings]
mgmtHostPort = <IP address of the Search Head server>:8089

However, I am now seeing the following log, and a 401 is being returned. The request destination is https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1

Concern Point #2
I thought I could resolve this in the same way as Concern Point #1, by changing the request destination to the <IP address of the Search Head server>, but I don't know how to do that (I'm unsure if this approach is correct, so I would appreciate your guidance).

splunk_ta_microsoft-cloudservices_azure_audit.log
2024-12-13 10:41:22,011 +0000 log_level=ERROR, pid=194872, tid=MainThread, file=config.py, func_name=log, code_line_no=66 | UCC Config Module: Fail to load endpoint "global_settings" - Unspecified internal server error.
reason={"messages":[{"type":"ERROR","text":"Unexpected error \"<class 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [401]: Unauthorized -- call not properly authenticated\". See splunkd.log/python.log for more details."}]}
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_audit.py", line 21, in <module>
    schema_para_list=("description",),
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_mod_input.py", line 232, in main
    log_suffix=log_suffix,
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_mod_input.py", line 130, in run
    tconfig = tc.create_ta_config(settings, config_cls or tc.TaConfig, log_suffix)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 228, in create_ta_config
    return config_cls(meta_config, settings, stanza_name, log_suffix)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 53, in __init__
    self._load_task_configs()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 75, in _load_task_configs
    config_handler = th.ConfigSchemaHandler(self._meta_config, self._client_schema)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_helper.py", line 95, in __init__
    self._load_conf_contents()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_helper.py", line 120, in _load_conf_contents
    self._all_conf_contents = self._config.load()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/config.py", line 143, in load
    log(msg, level=logging.ERROR, need_tb=True)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/config.py", line 64, in log
    stack = "".join(traceback.format_stack())
NoneType: None
~~~

Supplementary Information
The results of `curl` commands on the Heavy Forwarder server are as follows:

curl -k https://<IP address of the Search Head server>:8089/services/server/info → 200
curl -k https://<IP address of the Indexer server>:8089/services/server/info → 200
curl -k https://127.0.0.1:8089/services/server/info → 401 (Unauthorized)
curl -k https://<IP address of the Search Head server>/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized)
curl -k https://<IP address of the Indexer server>:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized)
curl -k https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized)

If you need any further information, please let me know!
Don't have a deployment server, hence copying the home folder across.
Hello guys, hope someone can help us out.

I am using Splunk Enterprise and am trying to store events after CIM mapping (via a Data Model) in an S3 bucket, but it doesn't seem possible to configure this on the Splunk side. My current approach: I have created a scheduled search with a report and stream the results to the summary_index. I also created an Ingest Action to stream all incoming events from summary_index to S3. The workflow works fine, except we get the same raw events written to S3, and what we want is to have the MAPPED events stored in S3. Do you know if/how we can stream mapped events from one index into another?

Some more details: the reason behind this is that the raw event has nested collections that we would like to restructure before giving the data back to the user. That's why the initial thought was to implement logic where:
1. Our_Service sends data to Splunk
2. Splunk performs the needed mapping and sends the mapped data to S3
3. Our_Service queries the bucket to get that formatted data

I was also trying to reuse the same tstats search as we use for the dashboard, but that produces a table rather than events (it shows up under Statistics, not Events), so the summary_index stays empty in that case. Any help is highly appreciated.
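A sketch of one possible approach (the index name, data model, and fields below are placeholders, not a verified recipe): instead of relying on the report's default summary indexing, end the scheduled search with collect, so that what lands in the summary index is the mapped/tabular result rather than the original raw events; an index-time rule such as an ingest action matching that index can then pick those events up.

| tstats summariesonly=true count from datamodel=Authentication by _time span=1h, Authentication.user, Authentication.action
| rename Authentication.* as *
| collect index=summary_cim sourcetype=stash

Keeping sourcetype=stash avoids the collected events counting against license again; whether the ingest action can then route them to S3 depends on matching that index/sourcetype at index time, so it is worth testing with a handful of events first.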