All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone, I'd like help with the following problem. A company got hacked, and we want to know how the hack happened and whether or not there was a data leak. The company does not use any EDR, SIEM, or NDR systems. Question: what is the best way to extract logs from the company's systems and analyze them in Splunk, and which rules should we start searching with?
@jotne thank you for your assistance. I'm afraid that will cause issues with the data already inside my indexers. Does this affect the stored data, or the data itself in any way?
I can see "i", which is the GUID for the indexer. You can check whether that works. You would have to first manually note down the GUID for each indexer and map which cluster it belongs to.
You would then need to create a pool, assign indexers to it, and get usage by cluster that way.
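As a rough sketch of that idea, assuming you maintain a CSV lookup (here called indexer_cluster_map, with fields guid and cluster — both name and fields are hypothetical, not a built-in lookup) mapping each indexer GUID to its cluster, a search along these lines could break license usage down per cluster and index:

```
index=_internal source=*license_usage.log* type=Usage
| lookup indexer_cluster_map guid AS i OUTPUT cluster
| stats sum(b) AS bytes BY cluster idx
| eval GB=round(bytes/1024/1024/1024,2)
```

The i, idx, and b fields are the ones visible in the LicenseUsage events quoted elsewhere in this thread; the lookup is the piece you would have to build and maintain yourself.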
What would be the storage requirement for SmartStore when the replication factor (rf) is 2 for an indexer cluster? Would it be double that of traditional storage for one indexer, or would only primary buckets move to SmartStore? Is anything specifically mentioned in the Splunk docs? (I have tried to find it but did not see anything.) Consider the following scenarios:
1) 2 on-prem indexers with dedicated storage, rf is 2. Each indexer has 5 TB of data, so combined it would be 10 TB.
2) 4 indexers across 2 sites, 2 on each site. Each site maintains 1 copy of each bucket; again, combined storage would be 10 TB.
When migrating to SmartStore, what would be the expected storage utilization?
Thanks for the response. However, I need the license usage per cluster; that only provides the total license usage for today.
Hi @whitefang1726, may I know if you have checked the default license dashboards in the DMC (Indexing ---> License Usage ---> Today or History)? Best Regards, Sekar
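If you would rather query it than use those dashboards, the daily rollup events in license_usage.log can be charted with a search roughly like the one below (type=RolloverSummary and the b field are the standard daily-summary fields; the 30-day window is just an example and assumes your _internal retention covers it):

```
index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval GB=round(b/1024/1024/1024,2)
| timechart span=1d sum(GB) AS daily_license_usage_GB
```

Note this still gives platform-wide totals per day, not the per-cluster split asked about above.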
This is a temporary index created for testing purposes. The index is deployed from master. How do I ensure that only my own site's data is retrieved?
I changed the metrics above, and then I couldn't save; it displayed a 500 error.
Thanks for the help, I see the logs now. I tried to use a different port to take the logs, from the syslog conf file:
source s_network { udp(port(10514)); };
destination d_splunk { udp("localhost" port(11514)); };
log { source(s_network); destination(d_splunk); };
With this, I now see the logs.
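For reference, since d_splunk in that syslog-ng config forwards to localhost port 11514, Splunk needs a matching UDP input listening there. A minimal inputs.conf sketch (sourcetype and index here are placeholders to adapt, not values taken from this thread):

```
[udp://11514]
disabled = false
connection_host = ip
sourcetype = syslog
index = main
```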
I'm not sure what you're stuck with. Ideally, I would need to see your current configurations and error messages to help. What configuration file(s) are you stuck with? Are your _internal logs reaching the indexers? Are you getting any errors?
Hi, Have a look at these docs. On the DS, make sure you've included noop for the CM's serverclass.conf entry:
[serverClass:<serverClassName>]
stateOnClient = noop
Also, ensure you're not overriding it at the app level. If you've already got this covered, can you please share the error message?
Hi, The wording is quite tricky, but I will do my best to explain:

1) "The maximum number of concurrent historical scheduled searches on this cluster has been reached": platform-level. Essentially, the Splunk platform has reached its concurrent search limit as defined by the concurrency settings in limits.conf. For example, if the limit is five, you might have six different searches all scheduled at once, triggering this error.

2) "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached": this is defined at a per-search level by max_concurrent in savedsearches.conf (advanced search settings). It is one by default. This means that one particular search has been scheduled while previous instances of that same search are still running. For example, you schedule a particular search to run every two minutes, but it takes six minutes to complete. Splunk starts it and, two minutes later, goes to schedule it a second time while it's still running, throwing the error.

As a tip, 1) is typically caused by incorrectly scheduled searches all scheduled for the same time, whereas 2) is caused by a particular search being scheduled too frequently or running for longer than expected, causing it to overlap with itself.
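To make concrete where each of the two limits lives, here is a sketch of the relevant settings (the setting names are the standard ones; the values and the saved-search name are purely illustrative):

```
# limits.conf -- platform-wide search concurrency (behind error 1)
[search]
base_max_searches = 6
max_searches_per_cpu = 1

# savedsearches.conf -- per-search concurrency (behind error 2)
[My Scheduled Search]
max_concurrent = 1
```

Raising these is rarely the right first move; fixing the scheduling patterns described above usually is.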
Hi Splunk Answers, Is there a way to get license count by cluster peer? For example, if I have 3 Splunk clusters, I need to get the license usage by cluster (location) and by index (and sourcetype if possible). The internal logs don't identify the indexer beyond the host (h). I'm thinking of different SPL queries but have no idea where I can get it. Need your help, thanks!
07-16-2024 21:14:52.451 -0500 INFO LicenseUsage - type=Usage s="test:source::/opt/splunk/var/log/test_2024-07-16.log" st="test-st" h=hosttest o="" idx="testidx" i="sadsadasdasdadasdasdasdasdasda" pool="auto_generated_pool_enterprise" b=503 poolsz=1234567891012
Hi, Quoting the docs: "If the cluster is not in a valid state and the local site does not have a full complement of primaries (typically, because some peers on the site are down), remote peers also participate in the search, providing results from any primaries missing from peers local to the site."

Looking at your diagram and search, am I correct in thinking that index=site01_* is only configured on site01 and index=site02_* is only configured on site02? If so, firstly, this is a misconfiguration and bad practice. However, it makes sense that your search affinity is not working, because you would not have a copy of the data in both sites. Only site02 would have data in site02_*, and therefore index=site0* would return data from both sites!

You should be managing your indexes via the cluster manager so that they're consistent. If you want different indexes per site, then you should be using a multi-cluster deployment. If you want to restrict access between sites and search heads, then you can use RBAC and search filters. Search affinity is not designed to be a security control and should not be treated as such.
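As an illustrative sketch of managing indexes from the cluster manager: the indexes.conf would live in an app under manager-apps and be pushed to every peer, so both sites define the same indexes (the app name, index names, and paths below are examples only, not taken from this thread):

```
# $SPLUNK_HOME/etc/manager-apps/site_indexes/local/indexes.conf
[site01_web]
homePath   = $SPLUNK_DB/site01_web/db
coldPath   = $SPLUNK_DB/site01_web/colddb
thawedPath = $SPLUNK_DB/site01_web/thaweddb

[site02_web]
homePath   = $SPLUNK_DB/site02_web/db
coldPath   = $SPLUNK_DB/site02_web/colddb
thawedPath = $SPLUNK_DB/site02_web/thaweddb
```

After placing the app on the manager, it is distributed to the peers with `splunk apply cluster-bundle`.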
@Tom_Lundie what about the syslog configuration? What should I do with it?
Thank you so much for your help. I'm new to Splunk and I really want to master it. I will go and check the config as you said and let you know.
Hi All, I am working on skipped searches. What is the difference between the two below?
1) The maximum number of concurrent historical scheduled searches on this cluster has been reached
2) The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached
Hi, It sounds like you've made great progress, nice one. There are multiple designs and opinions out there regarding getting syslog into Splunk, and it's up to you to decide what's best. To get you started, there are tools such as Splunk Connect for Syslog, which provides an "all in one" feel. You can also use a syslog service such as rsyslog or syslog-ng to listen for your logs and cache them to disk, then forward them via a monitor stanza in inputs.conf. However, if you want Splunk to listen directly, here is an example inputs.conf that you can tweak for your deployment:
[udp://10514]
disabled = false
connection_host = ip
sourcetype = <<firewall_product>>
index = main
For sourcetype, look on Splunkbase for your firewall vendor to check if there is an appropriate TA that you can use for field extractions. For example, for a Palo Alto firewall it would be pan_log. For index, pick an appropriate index to suit your needs. Finally, inputs.conf can be deployed either within an app (recommended) or directly under /opt/splunk/etc/system/local/. Also, make sure that port 10514 is permitted on the local firewall.
I'm trying to distribute an app from the deployment server to the indexers via the cluster manager. In the cluster manager's deploymentclient.conf, it uses serverRepositoryLocationPolicy and repositoryLocation to receive the app in $SPLUNK_HOME/etc/manager-apps, and the cluster manager pushes it to peer-apps on the indexers for distribution. Distribution to the indexers was successful, but an install error message appears in the deployment server's internal log. Is there a setting to prevent items distributed to manager-apps from being installed?