All Posts

The best option is to define your use cases and, based on those, remove unused values before indexing events to disk. But this leads to a situation where, once you identify a new use case, you must update your indexing definitions to get the new values into Splunk.

One thing you could check is whether those events contain the same information twice or even more times. This can happen when you have a coded value in your data and the same information is also added as clear text. Windows event logs are a good example of where this happens.

There are also some other things you could do:
- remove additional formatting (e.g. JSON objects often contain extra spaces)
- remove unnecessary line breaks
- check whether you could use metrics indexes for some data instead of putting everything into event indexes
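If you go the formatting-cleanup route, index-time SEDCMD rules in props.conf are one way to do it. A minimal sketch, assuming a hypothetical sourcetype my:json (the stanza name and patterns are illustrative, not from this thread):

  # props.conf on the indexer or HF doing the parsing
  [my:json]
  # collapse runs of spaces left over from pretty-printed JSON
  SEDCMD-strip_spaces = s/ {2,}/ /g
  # collapse repeated line breaks into one
  SEDCMD-strip_blank_lines = s/(\r?\n){2,}/\n/g

Test on a copy of the data first; SEDCMD changes _raw permanently at index time.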
Hello Everyone, I'm trying to create an app in Splunk-SOAR version 6.4.0.92 using minimal code, but I keep getting the error 'str' object has no attribute 'get' when I try to install it in the apps section of the Splunk-SOAR dashboard. Can anyone help with this, please? (Error message and app.json attached.)
This depends on what changes you are deploying. Some need a restart and some don't. You can find more information on docs.splunk.com. Here is basic information about clustering: https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Basicclusterarchitecture You should read it and try to understand what it means. Unfortunately that doc doesn't tell everything, as that would take too much space, and to be honest most of us don't need to know all those details.
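As a side note, the cluster manager's CLI can tell you beforehand whether a push will require a restart. A sketch, assuming a reasonably recent Splunk version (check the exact flags against the docs for yours):

  # on the cluster manager
  splunk validate cluster-bundle --check-restart
  splunk apply cluster-bundle
  splunk show cluster-bundle-status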
@isoutamo @PickleRick so every time I push a configuration bundle from the CM to the indexers, will a rolling restart of the indexers happen every time?
Hi @g_cremin

Are you able to share your code, please? This error occurs when your Python code attempts to use the .get() method on a variable that holds a string value. The .get() method is designed for dictionaries, to retrieve values associated with keys, not for strings.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
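For illustration only (this is not the app's actual code), the failure mode looks like this:

  # a value that the code expects to be a dict is actually a string
  config = "artifact_name"
  # config.get("name")  ->  AttributeError: 'str' object has no attribute 'get'

  # .get() works once the value really is a dictionary
  config = {"name": "artifact_name"}
  print(config.get("name"))  # -> artifact_name

  # defensive pattern while debugging, if the type is uncertain
  name = config.get("name") if isinstance(config, dict) else config

In SOAR apps this often means a JSON field (e.g. in app.json or an action result) holds a plain string where the loader expects an object.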
I wouldn't call it broken. Short of rebalancing buckets between restarting each single indexer (which is completely ridiculous), you can't make sure that every bucket is searchable throughout the rolling restart process. If you have RF=SF>=2, then it's all just a matter of reassigning primaries and maybe restarting only one indexer at a time (which can be a long process if you have a big setup). But what if you have RF=SF=1? (Yes, I've seen such setups.) So yes, it's a bit frustrating, but I wouldn't call it broken.
Probably the most important thing is that rolling restarts affect your searches every time they happen, even though there are options to avoid it. In particular, they affect all alerts, reports, etc. which are running when indexers, or search heads in a SHC, are restarted by a rolling restart. The implementation is still somewhat broken. There are ideas to fix this on ideas.splunk.com, but no estimates on whether Splunk has a plan and the capability to fix it.
Understood. Would you happen to have any advice on cleaning a big index?
I accessed the page below, registered with my information, and when I clicked the email button, I received the error shown in the image. https://www.splunk.com/en_us/download/splunk-cloud.html

Now I can't even access the Splunk website, because this is what I see: [screenshot]

I'm from Brazil, if that helps in any way. So, what should I do?

__________________________________________________________________

UPDATE: Apparently this is a Chrome browser issue, as I was able to log in and out multiple times in Microsoft Edge without any problems! From there, I can start my free trial! So I guess the solution is to change browsers!
You need to unmount the "/opt/splunk/var/lib/splunk/kvstore/mongo" folder. E.g. in docker-compose:

  volumes:
    - "/home/docker_volumes/etc:/opt/splunk/etc"
    - "/home/docker_volumes/var:/opt/splunk/var"
    - "/opt/splunk/var/lib/splunk/kvstore/mongo"

(The last entry, with no host path, creates an anonymous volume that masks the mongo directory inside the bind-mounted var volume.)
I assume the app should be

  [install]
  state = disable

or disabled?
MC doesn't normally directly monitor forwarders. It can do indirect monitoring by checking their logs in _internal index. Sometimes people add HFs to MC with indexer role but AFAIR it causes false alerts since HFs don't actually do indexing.
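For the indirect route, a sketch over _internal (field availability can vary a little by version):

  index=_internal source=*metrics.log* group=tcpin_connections
  | stats latest(_time) AS last_seen BY hostname sourceIp fwdType
  | eval minutes_since_last_seen = round((now() - last_seen) / 60, 1)

This lists forwarders that have recently opened connections to the indexing tier, which is usually enough to spot ones that went quiet.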
Hi @livehybrid

Thank you for your answer, but unfortunately it didn't solve my problem. I'm currently in an on-prem environment, and the workaround I found was to set the verify parameter (directly in curl.py) to False.

Line 99:
  r = requests.post(uri, data=payload, verify=False, cert=cert, headers=headers, timeout=timeout)

Maybe not the best, but it's working.
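If you'd rather not disable verification entirely: requests also accepts a file path in the verify parameter, so the same line can point at your internal CA bundle instead (the path below is hypothetical):

  r = requests.post(uri, data=payload, verify="/opt/splunk/etc/auth/my_ca_bundle.pem", cert=cert, headers=headers, timeout=timeout)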
How can we get the health status of the HFs, UFs, and IHFs that are connected to the DS? Using REST I am able to see the health of the MC, CM, LM, DS, Deployer, IDX, etc., but not the red/yellow/green health status of the forwarders. The REST call I am using is:

  | rest /services/server/health

On the MC I can see the health status of the MC, CM, LM, DS, Deployer, and IDX, but not of the forwarders. However, when I run the same query in any of the HFs' UIs, I can see their health results there.
Old thread, but leaving this here in the hope other people might chip in.

We have also seen the same WARNs, at a time when we suffered from KV Store consuming a whole lot of CPU and a wiredTigerCacheSizeGB over 15 times the sum of all collections. When we opened a Splunk Support case we were told:

  Splunk does not use KV Store to manage or store the history of scheduled searches. Scheduled searches are managed and tracked via internal logs, dispatch directories, and internal indexes, not KV Store.

However, we occasionally see those events on search heads, and frequently on heavy forwarders in a parsing-and-routing role, not connected to a License Manager and with KV Store disabled.
Hi @livehybrid

We have updated the launcher version and the install build in app.conf as part of the latest release of the TA-ipqualityscore app. However, during the upgrade process (from a previous version to the latest), we're encountering the following error in splunkd.log:

  04-04-2025 14:32:37.142 +0530 ERROR ChunkedExternProcessor [209108 ChunkedExternProcessorStderrLogger] - stderr: ImportError: cannot import name 'IPQualityScoreClient' from 'ipqualityscoreclient' (/opt/splunk/etc/apps/TA-ipqualityscore/bin/ipqualityscoreclient/__init__.py)

This issue seems to occur only during an upgrade. When the app is installed fresh from Splunkbase, it works without any errors. Could you please assist in identifying the root cause and recommend the appropriate fix?
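One thing worth checking, on the assumption that the upgrade left a stale module behind (upgrades overlay new files but don't always remove ones the new version dropped): run Splunk's Python on the affected host and see what the import actually resolves to:

  # cd /opt/splunk/etc/apps/TA-ipqualityscore/bin, then run: $SPLUNK_HOME/bin/splunk cmd python3
  import ipqualityscoreclient
  print(ipqualityscoreclient.__file__)  # which __init__.py is being loaded?
  print([n for n in dir(ipqualityscoreclient) if not n.startswith("_")])  # is IPQualityScoreClient there?

If the fresh install works but the upgrade doesn't, comparing the two bin/ directories (or removing the app directory before reinstalling) usually narrows it down.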
Hi @LukasO

Yes, you can use token authentication and SSO authentication together.

If you want to create tokens for SSO users, you will need to set up attribute query requests (AQR) or authentication extensions. Alternatively, you can create local Splunk users and generate tokens for those users. You can get to the token creation page at https://YourSplunkInstance/en-US/manager/search/authorization/tokens

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
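Once a token exists, an external client sends it as a Bearer token to the management port. A minimal sketch in Python (host, port, and paths are placeholders):

  import requests

  BASE = "https://YourSplunkInstance:8089"  # management port, placeholder host
  TOKEN = "<token created on the page above>"

  # one-shot search via the REST API, authenticated with the token only (no SSO involved)
  r = requests.post(
      f"{BASE}/services/search/jobs",
      headers={"Authorization": f"Bearer {TOKEN}"},
      data={
          "search": "search index=_internal | head 5",
          "exec_mode": "oneshot",
          "output_mode": "json",
      },
      verify="/path/to/ca_bundle.pem",  # placeholder; point at your CA bundle
  )
  print(r.json())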
Hello to the community, I'm trying to query Splunk from an external SDK, for which I am asking our admins for token authentication, but I am told that Splunk does not allow the coexistence of SSO (which is used now) and token-based authentication. A quick query to ChatGPT suggests that this may be possible, but I'd like to have it confirmed. Could anyone confirm using/administering such a deployment?

B.r.

Lukas
Great question! Let me clarify how tag enrichment works when ingesting AWS logs via Splunk's Data Manager.

1. CloudWatch Log Group Tags: When you ingest logs via Data Manager from CloudWatch Log Groups, the AWS resource tags (attached directly to the log group) are not automatically appended to your log events in Splunk. Currently, Data Manager doesn't provide built-in functionality to automatically propagate AWS resource tags into the log events.

Potential solution: If you need custom tags (env=, service=, custom=) in your log events ingested from CloudWatch, you have two options:
- Implement the tags within the logs themselves, directly at the application logging layer (Lambda function code or ECS task logging output); see the sketch below.
- Enrich the logs post-ingestion in Splunk, using lookups or calculated fields.

The same applies to Lambda logs: AWS CloudWatch does not automatically propagate resource tags into log events ingested by Data Manager, so, as with ECS, you'll need one of the two options above.
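For the first option, a minimal sketch of a Lambda handler that stamps every log line with static tags (tag names and values are illustrative; Splunk can then extract them as fields from the JSON):

  import json

  # illustrative tags baked in at the logging layer, not read from AWS resource tags
  TAGS = {"env": "prod", "service": "orders", "custom": "team-a"}

  def handler(event, context):
      # one JSON object per line keeps field extraction in Splunk trivial
      print(json.dumps({**TAGS, "message": "request processed", "request_id": context.aws_request_id}))
      return {"statusCode": 200}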