All Posts

To ensure Splunk fully reindexes a file whenever the datestamp changes, consider using initCrcLength and crcSalt in your inputs.conf. Relying on modification time alone (CHECK_METHOD = modtime) may not detect content changes if the file is overwritten with similar data. Including a unique timestamp in the file or path can also help.
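For illustration, a minimal inputs.conf sketch (the monitored path is hypothetical; initCrcLength extends how many leading bytes go into the CRC, and crcSalt = <SOURCE> mixes the full source path into it):

[monitor:///var/log/myapp/report.log]
# hash more of the file header so a changed datestamp near the top alters the CRC
initCrcLength = 1024
# include the full source path in the CRC; a renamed or datestamped path is treated as a new file
crcSalt = <SOURCE>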
Hi all, we have a requirement where a user needs to be emailed a dashboard periodically. The dashboard is built in Dashboard Studio, so the export option is available. I configured the export option and sent a mail, but the PDF output shows no data on the individual panels; it is generated while the panels are still searching for results. The dashboard has a time picker in it, and no matter which value I set (Last 4 hours to Last 30 days), the result is the same. Has anybody faced a similar issue, and is there any workaround? Please help.
I am facing the same issue.
When we delete a row in a CSV lookup file, it gets deleted for the moment, but on saving, that row reappears. It looks like a bug in the latest version, 4.0.5; it works perfectly fine in version 4.0.4. We are upgrading to 4.0.5 because of vulnerabilities in 4.0.4. Has anyone noticed this issue?
@tah7004 To use an ingest-time lookup, the field you want to apply must be specified as an indexed field. You can apply it successfully by configuring the configuration files as follows.

1. $SPLUNK_HOME/etc/apps/myapp/lookups/test.csv

field1,field2,field3
value1,value2,value3

2. $SPLUNK_HOME/etc/apps/myapp/local/props.conf

[test_ingest_lookup]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
TRANSFORMS-ingest_time_lookup = regex_extract_av_pairs, lookup_extract

3. $SPLUNK_HOME/etc/apps/myapp/local/transforms.conf

[regex_extract_av_pairs]
SOURCE_KEY = _raw
REGEX = \s([a-zA-Z][a-zA-Z0-9-]+)=([^\s"',]+)
REPEAT_MATCH = true
FORMAT = $1::"$2"
WRITE_META = true

[lookup_extract]
INGEST_EVAL = field3=json_extract(lookup("test.csv", json_object("field1", new_field, "field2", field2), json_array("field3")),"field3")

You can refer to another solution using INDEXED_EXTRACTIONS=json at the link below.
- How to filter Splunk data at ingest time (list match) (in Japanese)
https://qiita.com/chobiyu/items/aec5ef3a75a8bab96546
Splunk, as software running on top of the OS, doesn't have any privilege to choose between swap and real memory; that is decided purely by the OS. There used to be many swap issues in Linux which could be better addressed or explained by the vendor's support. Frequent swap access can impact Splunk performance negatively, so you may want to control 'swappiness' with the help of your OS admin. https://www.techtarget.com/searchdatacenter/definition/Linux-swappiness FYI.
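As a sketch of what that tuning looks like (run as root; the value 10 is a common starting point for dedicated Splunk hosts, but confirm it with your OS admin):

# check the current value
sysctl vm.swappiness
# lower it for the running system
sysctl -w vm.swappiness=10
# persist the setting across reboots
echo 'vm.swappiness=10' >> /etc/sysctl.conf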
Hi, I recently created a Dashboard Studio dashboard. While creating it, the dashboard title and widget titles are in one font format, but once I finished the dashboard and shared it publicly, the public URL shows a different font format when opened. The first screenshot shows the normal font format in which I created the dashboard; the second shows how it looks when opened via the shared URL. Please help on how to restore the original font.

[Screenshot: normal dashboard font format]
[Screenshot: dashboard opened via shared URL]
Hi, after updating to version 8.x, do I need to create new indexes? Please advise. Is there any documentation for this? @inveinvestigation #index
@sreeranjan wrote:

We are currently working with the Splunk Enterprise product. The client has informed us that we will be transitioning to Splunk Cloud. From what I understand, Splunk Cloud refers to the Splunk Cloud Platform, where the entire infrastructure is hosted and managed by Splunk on AWS. Even though it runs on AWS, it's still referred to as Splunk Cloud—not AWS Cloud—since the architecture and services are maintained by Splunk. Is that correct?

It's exactly this way. Usually when we talk about Splunk Cloud, it means just the Splunk core platform in the cloud. That cloud can be on AWS, Azure, or GCP. On top of it there are the Classic and Victoria experiences; from the user's point of view, this determines which options you have, e.g., for deploying apps. You can see those in the Splunk Cloud descriptions on docs.splunk.com. With SCP you can also expand your environment with Edge Processor or Ingest Processor, which help you with data ingestion configurations.
We have installed Splunk on Windows and want to send Windows logs from the Search Head, License Manager (LM), and Cluster Manager (CM) to a third party via an indexer. Somehow those logs can be seen in Search Head queries, but the indexer is not forwarding them to the third party.
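For reference, forwarding from an indexer to a third-party system typically needs an outputs.conf on the indexer along these lines (a minimal sketch; the destination address and port are hypothetical):

[tcpout]
# keep indexing locally while also forwarding
indexAndForward = true

[tcpout:thirdparty]
server = 203.0.113.10:514
# send raw (uncooked) data so non-Splunk receivers can parse it
sendCookedData = false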
We are currently working with the Splunk Enterprise product. The client has informed us that we will be transitioning to Splunk Cloud. From what I understand, Splunk Cloud refers to the Splunk Cloud Platform, where the entire infrastructure is hosted and managed by Splunk on AWS. Even though it runs on AWS, it's still referred to as Splunk Cloud—not AWS Cloud—since the architecture and services are maintained by Splunk. Is that correct?
That's great @Numb78, glad you were able to get it sorted!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
@smahoney To set a refresh, you must specify the refresh option on your search by adding "refresh: 3s" to the underlying search's options (to refresh every 3 seconds, in this example). Refresh is a setting that goes with data sources, not visualizations; it can be set on individual data sources or as a default for all data sources. https://docs.splunk.com/Documentation/Splunk/9.4.2/DashStudio/Default#Set_defaults_by_data_source_or_visualization_type If this helps, please upvote!
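As a sketch, in the dashboard definition's source JSON that looks like this (the data source name and query are illustrative):

"dataSources": {
    "ds_example": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | stats count",
            "refresh": "3s",
            "refreshType": "delay"
        }
    }
}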
That works, but I can't set refresh per panel from what I see in the documentation.
Hi @Sarvesh_Fenix, the 429 error you're seeing might be due to Azure Resource Graph throttling; Microsoft limits users to approximately 15 queries per 5-second window. Try the following:
- Increase your polling interval in the Azure add-on configuration
- Split your subscription monitoring into separate inputs

The Resource Graph approach you're currently using will continue to hit these limits. Microsoft's documentation (https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/guidance-for-throttled-requests) recommends implementing pagination, staggering requests, and proper retry logic. Check your Azure application permissions as well; you'll need AuditLog.Read.All and Directory.Read.All for Sign-in logs. If this helps, please upvote.
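For the polling interval, a minimal inputs.conf sketch (the input scheme name is inferred from the azure_metrics.py script in your traceback and the stanza name is illustrative; adjust to the input name shown in your add-on configuration):

[azure_metrics://my_azure_metrics]
# poll less frequently (every 10 minutes here) to stay under the throttling window
interval = 600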
Hello Splunkers,

Need help/reference to onboard Azure Sign-in logs to Splunk. I am trying with the Splunk Add-on for Microsoft Azure (Splunk Add on for Microsoft Azure | Splunkbase) but am unable to do so; I am getting the errors below:

2025-05-12 13:04:13,042 log_level=ERROR pid=319128 tid=MainThread file=base_modinput.py:log_error:317 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py", line 141, in collect_events
    resources = az_resource_graph.get_resources_by_query(helper, access_token, query, subscription_id.split(","), environment, resources=[])
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 63, in get_resources_by_query
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 48, in get_resources_by_query
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

05-12-2025 18:25:07.429 +0000 ERROR TcpInputProc [1731515 FwdDataReceiverThread-0] - Error encountered for connection from src=127.0.0.1:42076. error:140890C7:SSL routines:ssl3_get_client_certificate:peer did not return a certificate

05-12-2025 13:04:13.134 +0000 ERROR ExecProcessor [326053 ExecProcessor] - message from "/opt/splunk/bin/python3.9 /opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py" 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01 - Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py", line 141, in collect_events
    resources = az_resource_graph.get_resources_by_query(helper, access_token, query, subscription_id.split(","), environment, resources=[])
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 63, in get_resources_by_query
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 48, in get_resources_by_query
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/script.py", line 67, in run_script
    self.stream_events(self._input_definition, event_writer)
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 150, in stream_events
    raise RuntimeError(str(e))
RuntimeError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

TIA
Reviving a dead thread, but I believe your solution is precisely the same as @rocketboots_ser's, except it uses _time rather than a fixed time. In either case, changing time zones doesn't affect the outcome. A more compact rewrite is this (using "%F %T" for the output format):

| eval time_UTC = strftime(2 * _time - strptime(strftime(_time, "%F %TZ"), "%F %T%Z"), "%F %T")

which relies on the same trick of making strptime() treat the output of strftime() as UTC via the %Z variable. It doesn't matter whether you use _time or 2000-01-01, as long as you're consistent.
@smahoney Did you already try setting a global default for refresh in the dashboard's source code? https://docs.splunk.com/Documentation/Splunk/9.4.2/DashStudio/Default#Use_global_defaults If this helps, please upvote! Cheers! Sai
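A minimal sketch of what that global default looks like in the source JSON (the 30s value is illustrative):

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "refresh": "30s",
                "refreshType": "interval"
            }
        }
    }
}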
That's true, as you are sending over pure TCP, which is not S2S. Those fields are part of S2S metadata. If you want to send them as well, you must add them into your data stream's payload; you can use props.conf and transforms.conf to modify that as needed. But what are you actually trying to do, why, and where are you trying to send that data? Maybe there is some other way to get the events there? I see the OTEL name mentioned here...
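For illustration, one way to fold metadata into the payload is an ingest-time eval that rewrites _raw (a minimal sketch; the sourcetype stanza and delimiter are illustrative):

props.conf:
[my_sourcetype]
TRANSFORMS-prepend_meta = prepend_meta_to_raw

transforms.conf:
[prepend_meta_to_raw]
# prepend host and sourcetype to the raw event so they survive a plain-TCP hop
INGEST_EVAL = _raw=host."|".sourcetype."|"._raw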
One comment: if I remember correctly, you must first remove or rename your $SPLUNK_HOME/etc/passwd file. If it exists, user-seed.conf won't take effect.
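As a sketch of the full sequence (paths assume a Linux host with SPLUNK_HOME=/opt/splunk; the username and password are placeholders):

# move the existing passwd file aside so the seed is honored
mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak

# /opt/splunk/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <your-new-password>

# restart so the seed is applied
/opt/splunk/bin/splunk restart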