All Posts

Hi @Sarvesh_Fenix. The 429 error you're seeing is most likely due to Azure Resource Graph throttling; Microsoft limits users to approximately 15 queries per 5-second window. Two things to try: increase your polling interval in the Azure add-on configuration, and split your subscription monitoring into separate inputs (see the sketch below). The Resource Graph approach you're currently using will otherwise continue to hit these limits. Microsoft's documentation (https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/guidance-for-throttled-requests) recommends implementing pagination, staggering requests, and proper retry logic. Check your Azure application permissions as well - you'll need AuditLog.Read.All and Directory.Read.All for SignIn logs. If this helps, please upvote!
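For illustration, a hedged sketch of what the split might look like in inputs.conf. The azure_metrics stanza name and the subscription_id attribute are inferred from the traceback in the question; verify the exact names against your installed TA-MS-AAD version.

    # One input per subscription, with longer and staggered polling intervals
    # so the Resource Graph calls don't all land in the same 5-second window.
    [azure_metrics://subscription_a]
    subscription_id = <subscription-guid-a>
    interval = 600

    [azure_metrics://subscription_b]
    subscription_id = <subscription-guid-b>
    interval = 660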
Hello Splunkers,

I need help/references for onboarding Azure SignIn logs to Splunk. I am trying the Splunk Add-on for Microsoft Azure (Splunk Add-on for Microsoft Azure | Splunkbase) but am unable to do so; I'm getting the error below:

    2025-05-12 13:04:13,042 log_level=ERROR pid=319128 tid=MainThread file=base_modinput.py:log_error:317 | Get error when collecting events.
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
        self.collect_events(ew)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py", line 141, in collect_events
        resources = az_resource_graph.get_resources_by_query(helper, access_token, query, subscription_id.split(","), environment, resources=[])
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 63, in get_resources_by_query
        raise e
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 48, in get_resources_by_query
        r.raise_for_status()
      File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1024, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

    05-12-2025 18:25:07.429 +0000 ERROR TcpInputProc [1731515 FwdDataReceiverThread-0] - Error encountered for connection from src=127.0.0.1:42076. error:140890C7:SSL routines:ssl3_get_client_certificate:peer did not return a certificate

    05-12-2025 13:04:13.134 +0000 ERROR ExecProcessor [326053 ExecProcessor] - message from "/opt/splunk/bin/python3.9 /opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py" 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01 - Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
        self.collect_events(ew)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py", line 141, in collect_events
        resources = az_resource_graph.get_resources_by_query(helper, access_token, query, subscription_id.split(","), environment, resources=[])
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 63, in get_resources_by_query
        raise e
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 48, in get_resources_by_query
        r.raise_for_status()
      File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1024, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/script.py", line 67, in run_script
        self.stream_events(self._input_definition, event_writer)
      File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 150, in stream_events
        raise RuntimeError(str(e))
    RuntimeError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

TIA
Reviving a dead thread, but I believe your solution is precisely the same as @rocketboots_ser's, except using _time rather than a fixed time. In either case the changing of time zones doesn't affect the outcome. A more compact rewrite is this:

    | eval time_UTC = strftime(2 * _time - strptime(strftime(_time, "%F %TZ"), "%F %T%Z"), "%F %T")

This relies on the same trick of fooling strptime() into thinking the output of strftime() is in UTC via the %Z variable. It doesn't matter whether you use _time or 2000-01-01, as long as you're consistent.
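For anyone who wants to sanity-check it, a minimal standalone run (assuming your user's timezone is set to something other than UTC, so the two columns actually differ):

    | makeresults
    | eval time_local = strftime(_time, "%F %T")
    | eval time_UTC = strftime(2 * _time - strptime(strftime(_time, "%F %TZ"), "%F %T%Z"), "%F %T")
    | table time_local time_UTC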
@smahoney Did you already try this option of setting a global default for refresh in the code? https://docs.splunk.com/Documentation/Splunk/9.4.2/DashStudio/Default#Use_global_defaults A sketch of what that looks like is below. If this helps, please upvote! Cheers! Sai
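For reference, a hedged sketch of a global refresh default in the dashboard definition's JSON source; the "5m" value is an example, and the exact option names should be confirmed against the docs page above for your version:

    {
        "defaults": {
            "dataSources": {
                "ds.search": {
                    "options": {
                        "refresh": "5m",
                        "refreshType": "delay"
                    }
                }
            }
        }
    }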
That's true, as you are sending over pure TCP, which is not S2S; those fields are part of S2S's metadata. If you want to send them too, you must add them into your data stream's payload, and you can use props.conf and transforms.conf to modify the metadata as needed (a sketch is below). But what are you actually trying to do, and why, and where are you trying to send that data? Maybe there is some other way to get the events there? I see an OTEL name here....
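For illustration, a minimal sketch assuming (hypothetically) that the sender embeds a host=<value> token in each event's payload; the sourcetype name is made up:

    # props.conf
    [my_tcp_sourcetype]
    TRANSFORMS-set_host = set_host_from_payload

    # transforms.conf
    [set_host_from_payload]
    REGEX = host=(\S+)
    FORMAT = host::$1
    DEST_KEY = MetaData:Host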
One comment. If I remember correctly, you must first remove or rename your SPLUNK_HOME/etc/passwd file; if it still exists, that user-seed.conf won't take effect. Roughly like the steps below.
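A hedged sketch of the sequence, with placeholder credentials:

    # stop Splunk and move the old passwd file aside
    $SPLUNK_HOME/bin/splunk stop
    mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

    # create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
    [user_info]
    USERNAME = admin
    PASSWORD = <new-password>

    # then start Splunk so the seed file is applied
    $SPLUNK_HOME/bin/splunk start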
I have explained this, and also searchable buckets, in this post: https://community.splunk.com/t5/Deployment-Architecture/SF-and-RF-How-much-count-should-we-keep/m-p/580574/highlight/true#M25165. You didn't say whether you have a normal single-site cluster or a multisite cluster, or whether you have SmartStore in use. If you have SmartStore, then you must use SF=RF (see the sketch below); otherwise it's possible that Splunk tries to upload your new bucket from a replicated copy which is not searchable, and then it doesn't work with S2. If you have a multisite cluster, then you will also have a site replication factor and a site search factor, which manage those copies across your sites. As @PickleRick already said, your installation raises questions, since you said that you have 9 indexers and 7 search heads. I have to say that it's quite a weird combination, especially if those SHs are individual and not in a SHC.
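For the SmartStore case, SF=RF would be set in server.conf on the cluster manager; a hedged sketch with example values:

    # server.conf on the cluster manager; the factor values are examples
    [clustering]
    mode = manager
    replication_factor = 3
    search_factor = 3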
Thank you for all the input here. I was really getting caught up in the capture group without realizing that wasn't what I was even trying to figure out.
Yeah, I've been looking into data models and figuring out how to set my eventtypes to set up CIM, that's kinda how I fell down this particular rabbit hole.   
Good afternoon @ljvc. Could you provide some direction on how you're accessing the mc_notes collection from within the Mission Control app? I'm struggling to find this.
So if your indexers have a separate storage filesystem for indexes, consider creating the link before the upgrade for a headache-free KV store update:

    ln -s /mypath-to/mongo /opt/splunk/var/lib/splunk/kvstore/mongo
Hello! I realize this is bumping an extremely old thread, but it is still relevant. I went to use this and it looks like it completely ignores the "Domain Users" group. If a user is a member of two or more groups, it doesn't create a row for Domain Users in memberOf. If the account is ONLY a member of the "Domain Users" group, it doesn't even show the memberOf column. This seems to be the only group it happens with; any standard "Built-In" group from AD shows up except for Domain Users. Initially I thought it had to do with spaces, but groups with spaces show up fine, so I'm not sure what is happening here.
I also hit an upgrade bug with 9.4.1 on a client's indexers: the Mongo 4-to-7 upgrade migration failed to run because the scripts don't use SPLUNK_DB but hardcode /opt/splunk/var/lib.... The indexers had a separate filesystem, /data01/.... I was able to create a link from the mongo directory under /opt/splunk/var/lib/splunk/kvstore... to the "real" one in /data01, then restart, triggering the upgrade process to complete properly.
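As a hedged sketch of that workaround (the /data01 path is an example; adjust to wherever your real KV store directory lives):

    # stop Splunk, point the default kvstore path at the real mongo directory
    $SPLUNK_HOME/bin/splunk stop
    ln -s /data01/splunk/kvstore/mongo /opt/splunk/var/lib/splunk/kvstore/mongo
    $SPLUNK_HOME/bin/splunk start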
Just be aware - it is not an official download link (Splunk doesn't officially support or distribute such old products), so it might stop working at any time.
As @richgalloway already said, there are many different products, not only the one you are talking about. I suppose that your best option is to contact your local Splunk sales engineer or Splunk partner, and they can walk through the offering with you. Then it's much easier to select the correct options for your client.
Sourcetype is the "kind" of messages you get; it's not about what is contained within those events but how they're represented. If you want a nice and easy way of searching for events with a similar "meaning", you can use tags or eventtypes (a quick sketch is below). And you might want to dig into datamodels.
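For example, a minimal sketch of an eventtype plus tags; the eventtype name and search string here are made up:

    # eventtypes.conf
    [failed_login]
    search = sourcetype=linux_secure "Failed password"

    # tags.conf
    [eventtype=failed_login]
    authentication = enabled
    failure = enabled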
The name of the Splunk Slack channel has changed, but you can access it with the URL given by @ITWhisperer. I suppose that you could ask for reactivation of your current account via http://splk.it/slack. There haven't been many people managing those requests, so be prepared to wait some time.
It doesn't appear this is a feature. I tried all the existing solutions, but they are all old and none of them work with the 2025 Dashboard Studio.
The link above is out of date.  The current link is: https://download.splunk.com/products/universalforwarder/releases/6.4.6/windows/splunkforwarder-6.4.6-6635aa31e851-x86-release.msi
I think perhaps there's some mix-up in terminology that is making it harder to communicate the goal.

Splunk Enterprise is Splunk's core data platform product for on-premises installation. It can be used to collect observability (o11y) data.

Splunk Cloud (AKA Splunk Cloud Platform) essentially is Splunk Enterprise on a public cloud provider (AWS, GCP, or Azure).

Splunk Observability Cloud is Splunk's o11y product offering and is distinct from both Splunk Enterprise and Splunk Cloud. This product is available only as a cloud offering.

Splunk Real User Monitoring (RUM) and Splunk Synthetic Monitoring are other separate Splunk products.

That said, can you please re-state the goal?