All Topics


How to pass earliest and latest values to a data model search? For example, if I select a time range of the last 30 minutes in the time range picker but still give earliest and latest for the last 24 hours inside a normal search, the earliest and latest parameters take precedence and it works in a normal search. How do I implement the same thing with a data model query?
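For illustration only, a minimal SPL sketch of the two cases; my_index, my_st, and the Network_Traffic data model are placeholder names, not from the question:

Normal search, where inline time modifiers override the picker:
index=my_index sourcetype=my_st earliest=-24h latest=now

Data model search, where one common pattern is to pass earliest/latest in the tstats where clause:
| tstats count from datamodel=Network_Traffic where earliest=-24h latest=now by _time span=1h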
Is it impossible to apply SSL to HEC in the Splunk trial version?  
I have configured Splunk with SAML (ADFS), but we are facing an issue during logout, with the following error message: "Failed to validate SAML logout response received from IdP". I have set the logout URL in the SAML configuration to "https://my_sh:8000/saml/logout". How can I overcome this issue?
I had used Splunk Enterprise (free trial version) and the Universal Forwarder on my PC (Windows 11), but I uninstalled them because of trouble with my PC. I want to re-install Splunk Enterprise and the Universal Forwarder, but the installers output an error: "This version of Splunk Enterprise has already been installed on this PC". I tried deleting the Splunk and Universal Forwarder registry entries and program files, and ran the command "sc delete Splunk" in cmd, but the installer output is the same. If you know how to troubleshoot this, please let me know.
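For reference, a rough sketch of the kind of leftover-service/registry check described above, run from an elevated command prompt; the exact service and registry key names vary by version, so treat them as assumptions to verify on your own machine:

sc query state= all | findstr /i splunk
sc delete SplunkForwarder
sc delete Splunkd
reg query HKLM\SOFTWARE\Splunk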
I need to replace the command wc -l because I want to see a dashboard showing the total number of messages for a source.
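A minimal SPL sketch of a wc -l equivalent, with hypothetical index and source names:

index=my_index source="/var/log/my_app.log" | stats count AS total_messages

or, as a panel over time:

index=my_index source="/var/log/my_app.log" | timechart count AS total_messages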
We have more than one instance of S1 configured in the SentinelOne app on our SH. We do NOT have the S1 TA installed anywhere else. We have noticed that you can only specify a single "SentinelOne Search Index" in the base configuration. We have more than one index because we have configured each instance to go to a different index. Because of this, the only index where events are typed and tagged properly is the index we have selected in the app. Does anyone know how we can get around this and have the events in the other indexes typed and tagged correctly?
We keep failing these days whenever we have major spikes in ingestion, primarily over HEC. What would be a good and efficient way to detect major up/down spikes in data ingestion?
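One hedged sketch of a spike detector: chart indexing throughput from the internal metrics log and flag intervals far from a rolling average. The group and field names below are as I understand _internal metrics.log, so verify them in your environment:

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=5m sum(kb) AS kb
| streamstats window=24 avg(kb) AS avg_kb stdev(kb) AS stdev_kb
| eval spike=if(abs(kb-avg_kb) > 3*stdev_kb, 1, 0)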
On Splunk Cloud, we can receive HEC ingestion directly in the cloud, whereas on-prem we run a distinct sub-cluster for HEC and struggle to scale it up, with multiple downtime incidents. I wonder whether it's possible for an on-prem installation to send HEC data directly to the indexers.
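For context, a hedged sketch of what enabling HEC directly on an indexer looks like, since HEC is an [http] input in inputs.conf; the token value and index name are placeholders, and whether this fits your architecture (load balancing, SSL termination) is a separate question:

# inputs.conf on each indexer (system/local or a deployed app)
[http]
disabled = 0
port = 8088

[http://my_hec_input]
token = <your-token-guid>
index = main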
We have a case where the data resides under /usr/feith/log/*.log and the Splunk process can read these files; however, when I log in to the unix server I cannot navigate into this directory as the Splunk user. What's going on?

bash-4.4$ whoami
splunk
bash-4.4$ pwd
/usr/feith
bash-4.4$ \ls -tlr
total 388
...
drwxr-xr-x. 2 feith feith 4096 Dec 12 12:17 lib
drwx------. 19 feith feith 4096 Dec 13 01:00 log
bash-4.4$ cd log/
bash: cd: log/: Permission denied
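A hedged diagnostic/repair sketch, assuming the splunkd process actually runs as a different user (often root) than your shell: the log directory shown above is mode drwx------ and owned by feith:feith, so a non-root splunk user cannot traverse it.

ps -o user= -p $(pgrep -o splunkd)           # which user splunkd really runs as
sudo setfacl -m u:splunk:rx /usr/feith/log   # one possible fix: grant the splunk user traverse/read via an ACL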
Hello Team, I have successfully set up Splunk Observability Cloud to monitor Amazon Web Services through Amazon CloudWatch and can now observe all AWS services via IAM role. Additionally, I have a gRPC application running on an AWS EC2 instance, which generates custom metrics using a StatsD server via Golang. I would like to send these custom metrics to Splunk Observability Cloud to monitor the health of the gRPC application, along with the logs it generates. On my AWS Linux machine, I can see that the host monitoring agent is installed and the splunk-otel-collector service is running. Could you please advise on the method to send the custom metrics and logs generated by the StatsD server from the Golang gRPC application to Splunk Observability Cloud for monitoring? Thank you.
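A hedged sketch of how StatsD metrics are often wired into the collector, assuming the statsd receiver bundled with the OpenTelemetry Collector is available in your splunk-otel-collector build (the agent_config.yaml is typically under /etc/otel/collector/):

receivers:
  statsd:
    endpoint: 0.0.0.0:8125
    aggregation_interval: 60s

service:
  pipelines:
    metrics:
      receivers: [statsd, hostmetrics]

Add statsd alongside whatever receivers your existing metrics pipeline already lists, then point the Golang StatsD client at localhost:8125. Application logs are a separate pipeline (for example a filelog receiver or HEC), so the snippet above covers metrics only.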
Hello Splunk Community,

I am running Splunk Enterprise Version: 9.2.3

Steps to reproduce:
1. Make a config change to an app on the Cluster Manager - $SPLUNK_HOME/etc/master-apps/<custom_app>/local/indexes.conf
2. Validate and Check Restart from the Cluster Manager GUI.

Bundle Information:
- Updated Time shows a date/time from last month (did not update)
- The Active Bundle ID did not change

Unable to make changes to apps and have them pushed to the Indexers.

Note: there are other issues. All three of my clustered Indexers are in Automatic Detention, and I am seeing these messages in the GUI:
- Search peer xxx has the following message: The minimum free disk space (1000MB) reached for /opt/splunk/var/run/splunk/dispatch.
- Search peer xxx has the following message: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.

Ultimately, I am trying to push a change to the frozenTimePeriodInSecs setting to reduce stored logs and free up space.

Thanks for your help
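For reference, a minimal sketch of the setting being pushed, as a per-index stanza on the Cluster Manager; the index name and retention value are placeholders:

# $SPLUNK_HOME/etc/master-apps/<custom_app>/local/indexes.conf
[my_index]
# 30 days; data older than this rolls to frozen (deleted unless coldToFrozenDir is set)
frozenTimePeriodInSecs = 2592000

It still needs a successful validate and apply of the cluster bundle once the disk-space condition on the peers is cleared.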
Hi All,

I am trying to create a summary index for Cisco ESA Textmail logs. I will then rebuild the Email data model using the summary index. The scheduled search is running correctly, but when I try to search the summary index I get no events returned. How does one check that events are going into the summary index correctly?

Steps Taken:
- Created a new index called email_summary
- Created a scheduled search to run every 15 minutes
- In the settings I have ticked 'Enable summary indexing'

Saved Search:
index=email sourcetype=cisco:esa:textmail
| stats values(action) as action, values(dest) as dest, values(duration) as duration, values(file_name) as file_name, values(message_id) as message_id, values(recipient) as recipient, dc(recipient) as recipient_count, values(recipient_domain) as recipient_domain, values(src) as src, values(src_user) as src_user, values(src_user_domain) as src_user_domain, values(message_subject) as subject, values(tag) as tag, values(url) as url, values(user) as user, values(vendor_product) as vendor_product, values(vendor_action) as filter_action, values(reputation_score) as filter_score BY internal_message_id

Thanks, Dave
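A couple of hedged checks that often help here (names are placeholders): summary-indexed events normally carry the saved search name in their source field, the scheduler log shows whether each scheduled run succeeded, and as far as I recall the summary-index action only fires on scheduled runs, not on ad-hoc runs of the same search.

index=email_summary earliest=-24h | stats count BY source
index=_internal sourcetype=scheduler savedsearch_name="<your saved search name>" | table _time status run_time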
Hi all,

I was upgrading Splunk Enterprise from 9.0.x to 9.2.4 and then 9.3.2. When I try to restart the Splunk service I get the following:

Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.
Splunkd.service holdoff time over, scheduling restart.
Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
start request repeated too quickly for Splunkd.service
Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Unit Splunkd.service entered failed state.
Splunkd.service failed.

I'll add that from a Splunk standpoint I am a complete noob. I did some research on the upgrade process and followed the Splunk documentation.

TIA!
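A few hedged diagnostic commands that usually narrow this down (the unit name is taken from the messages above):

systemctl status Splunkd.service
journalctl -u Splunkd.service -n 100 --no-pager
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log

A common culprit after upgrades is the unit file or file ownership no longer matching the user Splunk runs as, but treat that as a guess to verify rather than a diagnosis.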
I have seen a lot of similar Questions/Solutions with this aggravating issue, none of which are working.

Trying to pull RabbitMQ API (JSON) data into Splunk. A Bash script creates /opt/data/rabbitmq-queues.json:

curl -s -u test:test https://localhost:15671/api/queues | jq > /opt/data/rabbitmq-queues.json

The Universal Forwarder has the following props.conf on the RabbitMQ server:

[rabbitmq:queues:json]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = JSON
KV_MODE = none

And the inputs.conf:

[batch:///opt/data/rabbitmq-queues.json]
disabled = false
index = rabbitmq
sourcetype = rabbitmq:queues:json
move_policy = sinkhole
crcSalt = <SOURCE>
initCrcLength = 1048576

We run btool on the Universal Forwarder to verify the settings are getting applied correctly:

sudo /opt/splunkforwarder/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunkforwarder/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunkforwarder/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunkforwarder/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunkforwarder/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunkforwarder/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunkforwarder/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunkforwarder/etc/system/default/props.conf HEADER_MODE =
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = JSON
/opt/splunkforwarder/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunkforwarder/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunkforwarder/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunkforwarder/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunkforwarder/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunkforwarder/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunkforwarder/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunkforwarder/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunkforwarder/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunkforwarder/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunkforwarder/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunkforwarder/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunkforwarder/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunkforwarder/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunkforwarder/etc/system/default/props.conf TRANSFORMS =
/opt/splunkforwarder/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunkforwarder/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunkforwarder/etc/system/default/props.conf maxDist = 100
/opt/splunkforwarder/etc/system/default/props.conf priority =
/opt/splunkforwarder/etc/system/default/props.conf sourcetype =
/opt/splunkforwarder/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunkforwarder/etc/system/default/props.conf unarchive_cmd_start_mode = shell

On the local Search Head we have the following props.conf:

[rabbitmq:queues:json]
KV_MODE = none
INDEXED_EXTRACTIONS = json
AUTO_KV_JSON = false

We run btool on the Search Head to verify the settings are getting applied correctly:

sudo -u splunk /opt/splunk/bin/splunk btool props list --debug "rabbitmq:queues:json"
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf [rabbitmq:queues:json]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf AUTO_KV_JSON = false
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf INDEXED_EXTRACTIONS = json
/opt/splunk/etc/apps/RabbitMQ_Settings/local/props.conf KV_MODE = none
/opt/splunk/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
/opt/splunk/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunk/etc/system/default/props.conf unarchive_cmd_start_mode = shell

However, even with all that in place, we're still seeing duplicate values when using tables:

index="rabbitmq" | table _time messages state
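A hedged diagnostic that can help separate the two usual causes here, duplicated field values within one event versus duplicate events from re-ingestion, using the messages field from the table above:

index="rabbitmq"
| eval n_vals=mvcount(messages)
| stats count BY source n_vals

If n_vals is greater than 1, the same field is being extracted more than once per event (indexed-time plus search-time extraction is the usual suspect); if n_vals stays at 1 but the event count itself doubles, the batch input is more likely re-reading the file.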
What is the best way to display dashboards/visualizations on multiple screens in a SOC? I am using a Raspberry Pi, Firefox, and a tab-cycling add-on right now. There has to be a better way.
This is not a particularly crucial question, but it has been nagging me for a while. When applying changes to indexes.conf on the manager node I usually do a validate -> show -> apply round, but I am somewhat confused about a detail. When you validate a bundle after making modifications, you get an updated "last_validated_bundle" checksum. In my mind, when you then apply this bundle, the now "last_validated_bundle" checksum should become the new "latest_bundle" and "active_bundle". But this is not the case.

$ splunk apply cluster-bundle
Creating new bundle with checksum=<does_not_match_last_validated_bundle>
Applying new bundle. The peers may restart depending on the configurations in applied bundle.
Please run 'splunk show cluster-bundle-status' for checking the status of the applied bundle.
OK

After the new bundle has been applied, the active_bundle, latest_bundle and last_validated_bundle checksums all match again. But why is the checksum produced after bundle validation not matching the checksum of the applied bundle?

Have a great weekend!
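For reference, a sketch of the round being described, as I understand the CLI (flags may vary slightly by version):

splunk validate cluster-bundle --check-restart
splunk show cluster-bundle-status
splunk apply cluster-bundle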
Dear experts,

In my dashboard I have a time picker providing the token t_time. My search:

index="abc" search_name="def"
    [| makeresults
     | eval earliest=relative_time($t_time.latest$,"-1d@d")
     | eval latest=relative_time($t_time.latest$,"@d")
     | fields earliest latest
     | format]
| table _time zbpIdentifier

should pick up that token and make sure only data from the last full day before t_time.latest is displayed. For example, 2024-12-12 13:13 should be converted to earliest = 2024-12-11 00:00 and latest = 2024-12-11 23:59:59 (or 2024-12-12 00:00).

As long as two actual dates are selected in the time picker, all works as expected. If, e.g., "Last 7 days" is selected, the search fails and no data is returned. I'm guessing that in relative mode $t_time.latest$ is represented by something like "now", which causes problems for the relative_time function. So the question is: how do I detect this "now" and turn it into a date understood by relative_time?
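A hedged sketch of one way to make the subsearch tolerant of a relative token, assuming that with a relative range the picker passes the literal string "now" rather than an epoch value:

index="abc" search_name="def"
    [| makeresults
     | eval lat=coalesce(tonumber("$t_time.latest$"), now())
     | eval earliest=relative_time(lat,"-1d@d"), latest=relative_time(lat,"@d")
     | fields earliest latest
     | format]
| table _time zbpIdentifier

coalesce(tonumber(...), now()) falls back to now() whenever the token is not a plain epoch number, which covers the "now" case; other relative strings (e.g. a snapped latest like "@d") would also fall back to now(), so adjust if that matters.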
Hi, I initially created an index named XYZ, and there are around 60 reports/alerts and 15 dashboards built on this index. Now the index name has to be changed to XYZ_audit, and I have to update all these reports with the new index name. Can I do this automatically using a script or some other way?
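A hedged, file-level sketch of one approach on an on-prem search head, assuming the objects live in app-level local directories and that you back everything up first (the REST API is the cleaner route, and the only route on Splunk Cloud); paths and the sed pattern are assumptions to adapt:

grep -rl 'index=XYZ' /opt/splunk/etc/apps/*/local/savedsearches.conf /opt/splunk/etc/apps/*/local/data/ui/views/*.xml
sed -i.bak 's/index=XYZ\b/index=XYZ_audit/g' <each matching file>

Private (user-level) objects live under etc/users, searches that quote the index name need a second pattern, and hand-edited .conf changes need a restart or debug/refresh afterwards.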
I am a beginner with Splunk. I am setting up Splunk Enterprise in a three-tier architecture with a Search Head server, an Indexer server, and a Heavy Forwarder server. I want to install the Splunk Add-on for Microsoft Cloud Services on the Heavy Forwarder server to ingest data from Azure Event Hubs. However, when I check the logs of the installed add-on (splunk_ta_microsoft-cloudservices_azure_audit.log), I see the following error:

2024-12-13 02:44:48,835 +0000 log_level=ERROR, pid=33699, tid=MainThread, file=rest.py, func_name=splunkd_request, code_line_no=67 | Failed to send rest request=https://127.0.0.1:8089/services/server/info, errcode=unknown, reason=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 175, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 723, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 1061, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 187, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f48c2a95e50>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:
~~~

Concern Point #1
It seems that the error has been resolved by adding the following to /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local/web.conf (just changing the request destination from <local of the Heavy Forwarder server> to <IP address of the Search Head server>):

[settings]
mgmtHostPort = <IP address of the Search Head server>:8089

However, I am now seeing the following log, and a 401 is being returned. The request destination is https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1

Concern Point #2
I thought I could resolve this in the same way as Concern Point #1, by changing the request destination to the <IP address of the Search Head server>, but I don't know how to do that (and I'm unsure whether this approach is correct, so I would appreciate your guidance).

splunk_ta_microsoft-cloudservices_azure_audit.log:
2024-12-13 10:41:22,011 +0000 log_level=ERROR, pid=194872, tid=MainThread, file=config.py, func_name=log, code_line_no=66 | UCC Config Module: Fail to load endpoint "global_settings" - Unspecified internal server error.
reason={"messages":[{"type":"ERROR","text":"Unexpected error \"<class 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [401]: Unauthorized -- call not properly authenticated\". See splunkd.log/python.log for more details."}]}
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_audit.py", line 21, in <module>
    schema_para_list=("description",),
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_mod_input.py", line 232, in main
    log_suffix=log_suffix,
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_mod_input.py", line 130, in run
    tconfig = tc.create_ta_config(settings, config_cls or tc.TaConfig, log_suffix)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 228, in create_ta_config
    return config_cls(meta_config, settings, stanza_name, log_suffix)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 53, in __init__
    self._load_task_configs()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 75, in _load_task_configs
    config_handler = th.ConfigSchemaHandler(self._meta_config, self._client_schema)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_helper.py", line 95, in __init__
    self._load_conf_contents()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_helper.py", line 120, in _load_conf_contents
    self._all_conf_contents = self._config.load()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/config.py", line 143, in load
    log(msg, level=logging.ERROR, need_tb=True)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/config.py", line 64, in log
    stack = "".join(traceback.format_stack())
NoneType: None
~~~

Supplementary Information
The results of curl commands run on the Heavy Forwarder server are as follows:

curl -k https://<IP address of the Search Head server>:8089/services/server/info → 200
curl -k https://<IP address of the Indexer server>:8089/services/server/info → 200
curl -k https://127.0.0.1:8089/services/server/info → 401 (Unauthorized)
curl -k https://<IP address of the Search Head server>/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized)
curl -k https://<IP address of the Indexer server>:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized)
curl -k https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized)

If you need any further information on specific aspects, please let me know!
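One hedged note on the 401s above: a 401 from splunkd usually just means the request was not authenticated, so a useful comparison point is an authenticated call from the Heavy Forwarder, for example:

curl -k -u admin https://127.0.0.1:8089/services/server/info

If that returns 200, the local management port itself is reachable and the add-on's problem is more likely with the credentials/session it uses (or with pointing mgmtHostPort away from the local splunkd), but treat that as a working hypothesis to verify rather than a conclusion.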
Hello guys, hope someone can help us out.

I am using Splunk Enterprise and am trying to store events after CIM mapping (via a Data Model) in an S3 bucket, but this doesn't seem to be configurable on the Splunk side. My current approach is that I have created a Scheduled Search with a Report and stream the results to a summary_index. I also created an Ingest Action to stream all incoming events from summary_index to S3. The workflow works fine, except that we get the same raw events written to S3, and what we want is to have the MAPPED events stored in S3. Do you know if/how we can stream mapped events from one index into another?

Some more details: the reason behind this is that the raw event has nested collections that we would like to restructure before giving the data back to the user. That's why the initial thought was to implement this logic:
1. Our_Service sends data to Splunk
2. Splunk performs the needed mapping and sends the mapped data to S3
3. Our_Service queries the bucket to get that formatted data

I was also trying to reuse the same tstats search as we do for the dashboard, but that eventually becomes a table and won't show up as events, only as a table under statistics, so the summary_index stays empty in that case. Any help is highly appreciated.
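A hedged sketch of the summarize-then-export step described above, assuming a populated Email data model purely as an example and a placeholder destination index cim_mapped; collect writes the search results (the mapped fields), not the original raw events:

| datamodel Email All_Email search
| fields "All_Email.src_user" "All_Email.recipient" "All_Email.subject" "All_Email.action"
| rename "All_Email.*" AS *
| collect index=cim_mapped

The Ingest Action would then watch cim_mapped instead of the raw index. Running something like this as the scheduled search (or appending | collect to it) is one way to end up with already-mapped events that an Ingest Action can ship to S3; whether tstats output can be collected the same way depends on the shape of the results, which may be why the summary index stayed empty in the tstats attempt.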