All Posts


What does that even mean? _indextime is not calculated; it is the time when the event was indexed. It is like asking "what is the average hour of the day?" Earliest and latest relate to _time, not _indextime. Usually, _indextime is after _time, as it takes time for the event to be logged, transmitted, parsed and indexed. Having said that, _time usually comes from the data in the event, and that timestamp could even be in the future. Please explain what your goal is in more detail.
Hi, could anybody give me a search to calculate the average _indextime for my events? Once that's done, what do I have to put in the cron parameters of my alert to take this metric into account? Thanks
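For what it's worth, a common way to turn this into a concrete number is to measure the indexing lag (the difference between _indextime and _time) rather than an "average _indextime". A rough sketch, with a placeholder index name:

index=your_index earliest=-24h
| eval index_lag_seconds = _indextime - _time
| stats avg(index_lag_seconds) AS avg_lag_seconds, perc95(index_lag_seconds) AS p95_lag_seconds

One option would then be to schedule the alert on whatever interval you need and compare avg_lag_seconds against a threshold in the alert condition; the cron schedule itself does not depend on the metric.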
Hello everyone, I'm new to Splunk and still have a lot to learn. I want to ask a question: how do I forward data in JSON format from Netscout to Splunk? Should I use a Universal Forwarder or maybe an app from Splunkbase? Thanks for your attention. #Netscout #JSON
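Not an official answer, but for illustration: if the Netscout device can export JSON events to a file on a host, one common pattern is to install a Universal Forwarder on that host and monitor the file. A minimal sketch with hypothetical paths, index and sourcetype names:

# inputs.conf on the Universal Forwarder (hypothetical path and names)
[monitor:///var/log/netscout/events.json]
index = netscout
sourcetype = netscout:json
disabled = false

# props.conf on the indexers or heavy forwarder
[netscout:json]
KV_MODE = json

Timestamp settings depend on the actual payload, and whether a Splunkbase app is a better route depends on which Netscout product you have; an app typically ships pre-built sourcetypes and dashboards on top of the same kind of inputs.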
The best way of getting data out of the company's systems is generally whatever is easiest to get out of them. Splunk can ingest data in many ways, and there are many standard ways of looking at data. What systems do you have and what logs are available? Do you currently use Splunk?
Hello everyone, I need help with the following problem. A company got hacked and we want to know how the hack happened and whether or not there is a data leak. The company does not use any EDR, SIEM, or NDR systems. Question: what is the best way to extract logs from the company's systems and analyze them in Splunk, and what rules should we start searching with?
@jotne thank you for your assistance. I'm afraid this will cause issues with the data already inside my indexers. Does this affect the stored data, or the data itself in any way?
I can see "i", which is the GUID for the indexer. You can check if that works. You would first have to manually note down the GUID for each indexer and map which cluster it belongs to.
You would need to create a pool and assign indexers to a pool per cluster to get usage by cluster.
What would be the storage requirement for SmartStore when the RF is 2 for an indexer cluster? Would it be double that of traditional storage for one indexer, or would only primary buckets move to SmartStore? Is anything specifically mentioned in the Splunk docs? (I have tried to find it but did not see anything.) Consider the following scenarios:

1) 2 on-prem indexers with dedicated storage, RF is 2. Each indexer has 5 TB of data, so combined it would be 10 TB.
2) 4 indexers - 2 sites, 2 on each site. Each site will maintain 1 copy of each bucket. Again, combined storage would be 10 TB.

When migrating to SmartStore, what would be the expected storage utilization?
Thanks for the response. However, I need the license usage per cluster. That only provides the total license usage for today.
Hi @whitefang1726, may I know if you have checked the default license dashboards in the DMC (Indexing -> License Usage -> today or history)?

Best Regards
Sekar
This is a temporary index created for testing purposes. The index is deployed from the master. How do I ensure that only my own site's data is retrieved?
I changed the metrics above, and then I couldn't save; it displayed a 500 error.
Thanks for the help, I see the logs now. I used a different port to take the logs, from the syslog-ng conf file:

source s_network { udp(port(10514)); };
destination d_splunk { udp("localhost" port(11514)); };
log { source(s_network); destination(d_splunk); };

With this, I now see the logs.
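For anyone following along, the Splunk side of that relay would typically be a UDP input listening on the port syslog-ng forwards to. A minimal inputs.conf sketch, with a placeholder index name:

# inputs.conf on the Splunk instance receiving from syslog-ng
[udp://11514]
index = network
sourcetype = syslog
connection_host = ip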
I'm not sure what you're stuck with. Ideally, I would need to see your current configurations and error messages to help. What configuration file(s) are you stuck with? Are your _internal logs reaching the indexers? Are you getting any errors?
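A quick way to check that second question is to search the _internal index for the forwarder's host name; a sketch, with the host name as a placeholder:

index=_internal host=<your_forwarder_hostname>
| stats count by sourcetype, source

If this returns events, the forwarder-to-indexer connection is working and the problem is likely in the inputs; if it returns nothing, check outputs.conf and splunkd.log on the forwarder.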
Hi,

Have a look at these docs. On the DS, make sure you've included noop for the CM serverclass.conf entry:

[serverClass:<serverClassName>]
stateOnClient = noop

Also, ensure you're not overriding it at the app level. If you've already got this covered, can you share the error message please?
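For reference, the app-level override mentioned above would look something like this in serverclass.conf; the names are placeholders:

[serverClass:<serverClassName>:app:<appName>]
# if stateOnClient is set to enabled or disabled here, it overrides
# the serverClass-level noop for this app
stateOnClient = noop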
Hi,

The wording is quite tricky but I will do my best to explain:

1) The maximum number of concurrent historical scheduled searches on this cluster has been reached: This is platform-level. Essentially, the Splunk platform has reached its concurrent search limit as defined by the concurrency settings in limits.conf. For example, if the limit is five, you might have six different searches all scheduled at once, triggering this error.

2) The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached: This is defined at a per-search level under max_concurrent in savedsearches.conf (advanced search settings). It is 1 by default. This means that one particular search has been scheduled while previous instances of that same search are still running. So, for example, you schedule a particular search to run every two minutes but it takes six minutes to run. Splunk starts it and, two minutes later, goes to schedule it for a second time, but it's still running, throwing the error.

As a tip, 1) is typically caused by incorrectly scheduled searches that are all scheduled for the same time, whereas 2) is caused by a particular search being scheduled too frequently or running for longer than expected, causing it to overlap with itself.
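To make the two settings concrete, here is where each limit lives; the values and the stanza name are illustrative only, not recommendations:

# limits.conf - platform-wide concurrency (case 1)
[search]
base_max_searches = 6
max_searches_per_cpu = 1

# savedsearches.conf - per-search concurrency (case 2)
[My Scheduled Search]
max_concurrent = 1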
Hi Splunk Answers,

Is there a way to get the license count by cluster peer? For example, if I have 3 Splunk clusters, I need to get the license usage by cluster (location) and by index (sourcetype if possible). The internal logs don't identify the indexer's cluster based on the host (h). I'm thinking of different SPL queries but have no idea where I can get this. Need your help. Thanks!

07-16-2024 21:14:52.451 -0500 INFO LicenseUsage - type=Usage s="test:source::/opt/splunk/var/log/test_2024-07-16.log" st="test-st" h=hosttest o="" idx="testidx" i="sadsadasdasdadasdasdasdasdasda" pool="auto_generated_pool_enterprise" b=503 poolsz=1234567891012
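Building on the replies above (using the indexer GUID in the i field), one possible sketch is to sum license_usage.log by GUID and map GUID to cluster with a lookup you maintain yourself; the lookup definition and its field names here are hypothetical:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes by i, idx, st
| lookup indexer_guid_to_cluster guid AS i OUTPUT cluster
| stats sum(bytes) AS bytes by cluster, idx, st
| eval GB = round(bytes/1024/1024/1024, 2)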
Hi,

Quoting the docs: "If the cluster is not in a valid state and the local site does not have a full complement of primaries (typically, because some peers on the site are down), remote peers also participate in the search, providing results from any primaries missing from peers local to the site."

Looking at your diagram and search, am I correct in thinking that index=site01_* is only configured on site01 and index=site02_* is only configured on site02? If so, firstly this is a misconfiguration and bad practice. However, it makes sense that your search affinity is not working, because you would not have a copy of the data in both sites. Only site02 would have data in site02_*, and therefore index=site0* would return data from both sites!

You should be managing your indexes via the cluster manager so that it's consistent. If you want different indexes per site, then you should be using a multi-cluster deployment. If you want to restrict access between sites and search heads then you can use RBAC and search filters. Search affinity is not designed to be a security control and should not be treated as such.
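As a small illustration of the RBAC option mentioned above, index access per role is controlled in authorize.conf; a sketch, with hypothetical role and index names:

[role_site01_analyst]
# restrict this role to the site01 indexes
srchIndexesAllowed = site01_*
# a srchFilter could further restrict which events the role can see, e.g. by host or source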
@Tom_Lundie what about the syslog configuration? What should I do with it?