All Posts


Hello everyone, I am trying to follow this manual: https://docs.splunk.com/Documentation/StreamApp/7.2.0/DeployStreamApp/InstallStreamForwarderonindependentmachine. I run into the issue below once I SSH in and run the command on my Linux VM.
1. Don't just enable all Correlation Rules. You'll kill your ES installation.
2. Try this to find the rule which creates your notables: | rest /services/saved/searches | search action.notable.param.rule_title="Access - * - Rule" | table title action.notable.param.rule_title action.notable.param.security_domain disabled eai:acl.app eai:acl.owner eai:acl.sharing
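If the REST search doesn't surface it, you can also see which correlation search generated your existing notables by grouping the notable index by search name - a minimal sketch, assuming the default ES notable index:

```
index=notable
| stats count by search_name
| sort - count
```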
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/AboutSmartStore#Features_not_supported_by_SmartStore
Tsidx reduction. Do not set enableTsidxReduction to "true". Tsidx reduction modifies bucket contents and is not supported by SmartStore. Note: You can still search any existing buckets that were tsidx-reduced before migration to SmartStore. As with non-SmartStore deployments, such searches will likely run slowly. See Reduce tsidx disk usage.
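For reference, the setting in question lives in indexes.conf on the indexers. A minimal sketch of a SmartStore-enabled index - the index and volume names here are hypothetical:

```
# indexes.conf (sketch; index and volume names are hypothetical)
[my_index]
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# tsidx reduction is not supported with SmartStore; leave this at the default
enableTsidxReduction = false
```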
Hello Splunkers,

Need some clarification on SmartStore data migration. As per the docs, you can still search any existing buckets that were tsidx-reduced before migration to SmartStore. For example, we have 18 months of data retention, and we need to keep 6 months of data in local/cache storage due to frequent audit/forensic searches that need raw data fields.

Questions:
1. Is it possible to migrate tsidx-reduced buckets to the object store without a rebuild, with the indexer cluster still searching them as normal (just slower, as usual for tsidx-reduced buckets)? Or do we need to rebuild all buckets before initiating data migration to the object store? In our case that would mean rebuilding all the buckets from months 7 to 18, and in some cases we may run out of local space if we have to do this.
2. What is the performance impact of searching a reduced bucket with the addition of SmartStore? Since the cache manager has to fetch the bucket from the remote store and then rebuild it locally in the cache(?) before it becomes searchable, are the two levels of performance hit too much? Has anyone had such a situation?

Thanks for your attention,
Manduki

https://docs.splunk.com/Documentation/Splunk/latest/Indexer/AboutSmartStore
Tsidx reduction. Do not set enableTsidxReduction to "true". Tsidx reduction modifies bucket contents and is not supported by SmartStore. Note: You can still search any existing buckets that were tsidx-reduced before migration to SmartStore. As with non-SmartStore deployments, such searches will likely run slowly.
Yes, exactly, this is what I am surprised about: why does it add "Access - login splunk - Rule" although I did not modify the address? I activated every rule but still have the same problem: all the results are categorized as Threat. Is there a solution to this problem? I will be grateful to you.
I'm tempted to say you're looking at the wrong correlation search. The one we're both looking into is a standard search defined in SA-AccessProtection called "Excessive Failed Logins", right? And it should produce a notable with the title "Excessive Failed Logins". But your notables have the title "Access - login splunk - Rule". It is most probably something created in your environment (even more so because "splunk" is spelled with a lowercase "s", so it's definitely not something provided by Splunk).
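To confirm where that custom search is defined, you can look it up over REST by its exact title - a sketch, adjust the title if yours differs:

```
| rest /services/saved/searches
| search title="Access - login splunk - Rule"
| table title eai:acl.app eai:acl.owner eai:acl.sharing search
```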
Hello @Michael.Mom, In addition to @MARTINA.MELIANA's answer: if you would like to monitor the AWS RDS SQL Server instance's hardware metrics, you can install the Machine Agent on that server and you will be able to monitor hardware metrics. You may visit the pages below for more details:
https://docs.appdynamics.com/appd/24.x/24.7/en/infrastructure-visibility
https://docs.appdynamics.com/appd/24.x/24.7/en/infrastructure-visibility/overview-of-infrastructure-visibility
https://docs.appdynamics.com/appd/24.x/24.7/en/infrastructure-visibility/hardware-resources-metrics
Best Regards, Rajesh Ganapavarapu
I have the same settings; it categorizes all correlations with the value Threat.
1. Short answer - something is wrong.
2. Longer answer - you provided us with almost no info at all. Apart from the fact that it's Kali Linux, we have no idea what's going on. What's the actual error, what was the full command, have you even downloaded the file...
1. Maybe someone tampered with your installation. This is from my lab with default settings:
2. Anyway, even if there was an error, the proper channel to report it is to create a Support case. This is a community-driven forum, not a support channel.
While using Splunk ES, we noticed that correlation searches were set to an incorrect security domain on the Incident Review page. This leads to inaccurate classification of security events and affects the decision-making process. The first step was to set Security Domain = Access. The problem is that instead of being classified as Security Domain = Access, everything is classified as Threat, and so all cases are classified as Threat. This causes us a problem with the values not appearing on the Security Posture page.
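For context, the security domain of a correlation search is stored as a notable action parameter in savedsearches.conf. A minimal sketch of what a correctly configured stanza might look like - the stanza name matches the rule title discussed in this thread, everything else is hypothetical:

```
# savedsearches.conf (sketch; values other than the stanza name are hypothetical)
[Access - login splunk - Rule]
action.notable = 1
action.notable.param.rule_title = Access - login splunk - Rule
# "access" here is what makes the notable land under Security Domain = Access
action.notable.param.security_domain = access
```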
Good day all, I am new here and will be grateful and appreciative of any communication here towards achieving a great result in solving my issue... I got stuck installing Splunk Enterprise using the command copied from the official site, and I will be grateful if this is resolved soon... sudo dpkg -I splunk-9.2.2.... After entering the command, it shows the error below... dpkg: Error Please help...
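For comparison, the usual way to install a Splunk Enterprise .deb package uses dpkg's lowercase -i flag; uppercase -I (--info) only prints package metadata, which may be part of the problem here. A sketch - the package filename below is illustrative, use the full filename you actually downloaded:

```bash
# lowercase -i installs the package; uppercase -I (--info) only inspects it
sudo dpkg -i splunk-9.2.2-<build>-linux-2.6-amd64.deb
```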
Ok. So you _probably_ should handle it completely differently. It has nothing to do with the index itself. The data format has changed, which means you should use another sourcetype. In that new sourcetype you should provide a compatibility layer for the old one - creating aliases, lookups and calculated fields to match the old sourcetype's fields, as sketched below. It would be even better if your sourcetypes were CIM-compatible and you were searching from datamodels, but I don't suppose that's the case.
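A rough illustration of that compatibility layer in props.conf - sourcetype and field names are hypothetical; the idea is that the new JSON sourcetype exposes the old CSV-era field names so existing searches keep working:

```
# props.conf (sketch; sourcetype and field names are hypothetical)
[vendor:app:json]
KV_MODE = json
# expose the old CSV-era field names on top of the new JSON fields
FIELDALIAS-compat_src  = source_ip AS src
FIELDALIAS-compat_user = user_name AS user
# calculated field for values that need reshaping rather than renaming
EVAL-duration = duration_ms / 1000
```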
My main goal is to go to all locations and make sure that the SPL being used along with index=<my_index> is actually doing what it should, because the data changed from CSV format to JSON and the field-value pairs changed their naming convention - all because our environment chose to change the application it uses to gather said data. I have been tasked with this, and I was not even the person who created all these searches/reports/alerts etc., so no tribal knowledge, which led me to this forum.
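One way to inventory at least the saved searches, reports, and alerts that reference the index is a REST search like this - a sketch; substitute your real index name, and note it won't catch dashboards or ad-hoc searches:

```
| rest /services/saved/searches
| search search="*index=my_index*"
| table title eai:acl.app eai:acl.owner search
```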
@PickleRick is correct in that this is a Hard Problem. For further discussion of it, along with ways to (partially) solve it, see the .conf24 talk I co-produced at GitHub - TheWoodRanger/presentation-conf_24_audittrail_native_telemetry
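In the same spirit, ad-hoc and scheduled search activity against the index can be sampled from Splunk's own audit trail - a sketch, with my_index as a placeholder; field coverage varies by version:

```
index=_audit action=search info=completed search=*
| search search="*index=my_index*"
| stats count by user
```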
This is where good documentation of the Splunk project comes in handy. As @PickleRick suggested, this task will need the long route, and the manual work may give you a headache. Thanks.
Hello @Gustavo.Marconi, Yes, you should have full rights to install curl on the pod. Can you check the network connection by running curl after installing it? Best Regards, Rajesh
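For example, a basic reachability check from inside the pod might look like this - a sketch; the controller host and port are placeholders for your environment, and /controller/rest/serverstatus is used here as a lightweight health-check endpoint:

```bash
# verbose output shows TLS handshake and HTTP details
curl -v https://<controller-host>:443/controller/rest/serverstatus
```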
I'm using Splunk Cloud Search & Reporting with Kubernetes 1.25 and Splunk OTel Collector 0.103.0. I have Kubernetes pods with multiple containers. Most of the containers have their logs scraped and sent to the Splunk index based on the 'splunk.com/index' namespace annotation, so normal Splunk OTel Collector log scraping. But one container's logs must go to a different index. The pods that have a container whose logs must be routed differently have a pod annotation like 'splunk-index-{container_name}=index'. I had this working in Splunk Connect for Kubernetes using this config:

```yaml
customFilters:
  #
  # filter that sets the splunk_index name from a container annotation
  # - The annotation is a concatenation of 'splunk-index-' and
  #   the name of the container. If that annotation exists on
  #   the container, then it is used as the splunk index name,
  #   otherwise the default index is used.
  # - This is used in june-analytics and june-hippo-celery to route
  #   some logs via an annotated sidecar that tails a log file from
  #   the primary application container.
  # - This could be used by any container to specify its splunk index.
  #
  SplunkIndexOverride:
    tag: tail.containers.**
    type: record_transformer
    body: |-
      enable_ruby
      <record>
        splunk_index ${record.dig("kubernetes", "annotations", "splunk-index-" + record["container_name"]) || record["splunk_index"]}
      </record>
```

My attempt to do this with the Splunk OTel Collector uses the following config in the values.yaml file for the Splunk OTel Collector v0.103.0 Helm chart to add a processor that checks for the annotation:

```yaml
agent:
  config:
    processors:
      # set the splunk index for the logs of a container whose pod is
      # annotated with `splunk-index-{container_name}=index`
      transform/logs/analytics:
        error_mode: ignore
        log_statements:
          - context: log
            statements:
              - set(resource.attributes["com.splunk.index"], resource.attributes[Concat("splunk-index-", resource.attributes["container_name"], "")]) where resource.attributes[Concat("splunk-index-", resource.attributes["container_name"], "")] != nil
```

The splunk-otel-collector logs show this error:

Error: invalid configuration: processors::transform/logs/analytics: unable to parse OTTL statement "set(resource.attributes[\"com.splunk.index\"], resource.attributes[Concat(\"splunk-index-\", resource.attributes[\"container_name\"], \"\")]) where resource.attributes[Concat(\"splunk-index-\", resource.attributes[\"container_name\"], \"\")] != nil": statement has invalid syntax: 1:65: unexpected token "[" (expected ")" Key*)

It seems it does not like the use of Concat() to create a lookup key for attributes. So how would I do this in Splunk OTel Collector?
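One workaround sketch, under two assumptions: the set of specially-routed container names is small and known ("sidecar" below is a hypothetical name), and the pod annotation has already been promoted to a resource attribute (for example via the k8sattributes processor's annotation extraction rules). Since the OTTL parser here only accepts literal map keys, the dynamic Concat() key is replaced with one statement per known container:

```yaml
agent:
  config:
    processors:
      transform/logs/analytics:
        error_mode: ignore
        log_statements:
          - context: log
            statements:
              # one statement per known container; "sidecar" is a hypothetical name
              - set(resource.attributes["com.splunk.index"], resource.attributes["splunk-index-sidecar"]) where resource.attributes["k8s.container.name"] == "sidecar" and resource.attributes["splunk-index-sidecar"] != nil
```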
It ended up being that the time format was wrong. I had the month and day swapped. Thanks all for chiming in.
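For anyone hitting the same thing, the month/day order comes from the TIME_FORMAT strptime string in props.conf. A minimal sketch - the sourcetype name and format are hypothetical, match them to your actual data:

```
# props.conf (sketch; sourcetype is hypothetical)
[my_sourcetype]
# %m/%d/%Y parses 03/04/2025 as March 4; %d/%m/%Y would parse it as April 3
TIME_FORMAT = %m/%d/%Y %H:%M:%S
```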