All Posts



We don't know the whole picture, so it's a bit difficult to give a precise recommendation on the license issue. There are at least two different solutions here: you could attach your on-prem DS to your LM in Azure (but you would need to make sure there is proper connectivity, of course), or you could obtain a "zero ingest" license from your local Splunk sales team for the DS alone. You also need to point your deployment clients' deploymentclient.conf to the new DS. (If your infrastructure was well designed, you would probably just deploy a new version of the proper app from the old DS.)
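For illustration, pointing a deployment client at the new DS is a one-line change in deploymentclient.conf (the hostname here is hypothetical; 8089 is the default management port):

```
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
# (or, better, delivered via an app pushed from the old DS)
[deployment-client]

[target-broker:deploymentServer]
# Hypothetical new on-prem deployment server address
targetUri = new-ds.example.local:8089
```

The client picks up the new target after a restart of the forwarder service.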
OK. You're _not_ waiting on the indexing queue, so it doesn't seem to be an issue of backpressure from a disk unable to keep up with the rate of incoming events, or from any other configured outputs. I'd start by checking the OS-level metrics (CPU usage, RAM). If _nothing_ else changed "outside" (the amount of events, their ingestion rate throughout the day - not only the summarized license consumption over the whole day - and the composition of the ingest stream, split among different sources, sourcetypes and so on), something must have changed within your infrastructure. There are no miracles. Is this a bare-metal installation or a VM? There could be issues with oversubscribed resources if it's a VM, or even with the environment temperature in your DC causing your CPUs to get throttled (yes, I've seen such things). But if the behaviour changed, something must have changed. The question is what.
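As a starting point for those OS-level checks, Splunk's own platform instrumentation already records host-wide CPU and memory usage in the _introspection index; a sketch (substitute your indexer's hostname):

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_indexer>
| timechart span=5m avg(data.cpu_system_pct) AS cpu_system_pct avg(data.cpu_user_pct) AS cpu_user_pct avg(data.mem_used) AS mem_used
```

If CPU is pinned only during certain hours, compare that window against the ingest-rate timeline before blaming the hardware.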
Hello team, we need to migrate our deployment server from Azure cloud to on-premise, with a new IP and hostname. Please suggest which .conf files we have to change for the new IP and hostname. We also need to check the license, as the deployment server holds the master license - where is this master license file located? Kindly suggest.
Hi @arjitg, in my opinion the issue is with the CPUs you are using: how many CPUs does your indexer have? How many logs are you ingesting? What kind of storage are you using? You can measure the IOPS of your storage using a tool such as Bonnie++, remembering that Splunk requires at least 800 IOPS. In addition, from your screenshot I see that your index queue is at 0%, so the issue is probably in the typing pipeline. Do you have many add-ons that parse your data? Only if you have performant storage (at least 800 IOPS) can you use two parallel pipelines, which uses more resources; for more info see https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Pipelinesets#Configure_the_number_of_pipeline_sets . Ciao. Giuseppe
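For reference, the parallel pipelines mentioned above are enabled in server.conf on the indexer - a minimal sketch, to be applied only if storage and CPU headroom allow, per the linked docs:

```
# $SPLUNK_HOME/etc/system/local/server.conf on the indexer
[general]
# Each pipeline set consumes its own share of CPU and I/O
parallelIngestionPipelines = 2
```

A restart of the indexer is required for the change to take effect.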
Hi @gcusello, I checked the data ingested via licensing, so there isn't any additional influx of data into Splunk. Could insufficient resources (CPUs) for the log volume be an issue then? Also, you mentioned IOPS. There is no increase in data and it's not a new setup, so how can IOPS be impacted? Is there a way we can check more details around what could have impacted the index queue? Thanks.
Hi, I'm Lily. I want to get network traffic data from a Keysight Vision E10S (smart tap device). How can I get it using the stream forwarder?
Hi @arjitg, there are many factors to investigate that could cause this issue: slow storage (Splunk requires at least 800 IOPS), which is the most common cause; insufficient resources (CPUs) for the log volume you have to index, which is a frequent cause; or too many regexes used in the typing queue, which shouldn't be the cause here because your index queue also reached 100%. Check your storage and CPU resources. Ciao. Giuseppe
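To see which queue is actually filling and when, a common starting point is the queue metrics Splunk writes to its own metrics.log (standard fields from the group=queue events):

```
index=_internal source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=5m avg(fill_pct) BY name
```

The earliest queue in the pipeline to hit 100% is usually the real bottleneck; queues upstream of it fill up as a consequence.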
@kprior201: Thank you for the prompt response.
Hi all, we have a Monitoring Console, and after a recent release we observed that the aggregator queue, typing queue, and index queue fill ratios have all reached 100%. I have checked the indexer performance dashboards in the Monitoring Console, and I wasn't able to find any relevant error which might have caused it. The data ingestion rate in the licensing console looked the same as it does every day. Can someone please point me to the right steps to troubleshoot this? Thanks.
Hi @toporagno, as @richgalloway said, in the Security Essentials app and in the ES Content Updates app there are many samples of a brute force attack followed by a successful login. Anyway, you could try something like this:

| tstats summariesonly=true count(eval(Authentication.action="success")) AS success_count count(eval(Authentication.action="failure")) AS failure_count FROM datamodel=Authentication WHERE Authentication.action IN (success, failure) BY Authentication.user
| rename Authentication.user AS user
| where failure_count>=6 AND success_count>=6

which you can adapt to your data. Ciao. Giuseppe
Hi @toporagno, let me understand: you have an eventtype like "index=my_index src=10.0.0.1" that tags these events with a tag like "MY_TAG"; you have a search that uses the above src, and it runs; you want to replace the condition "src=10.0.0.1" with the condition tag=MY_TAG, and it doesn't run. Is that correct? If this is your use case, as @ITWhisperer also asked, the first thing is to share the search and the eventtype associated with the tag. Then, tag is the only case-sensitive field - are you sure that the tag value is correct? Second check: did you try replacing the condition "src=10.0.0.1" with the eventtype associated with the tag? Maybe the two conditions aren't compatible. Ciao. Giuseppe
Hi @vishwa, you can use the regex below: ([A-Z]+)\:\s+(.+?)\s+
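A minimal sketch of applying that pattern with rex (the capture-group names msg_type and msg_text are made up for the example - rename them to whatever fields you want):

```
| rex field=_raw "(?<msg_type>[A-Z]+)\:\s+(?<msg_text>.+?)\s+"
```

This extracts the uppercase keyword into msg_type and the text after the colon (up to the next whitespace run) into msg_text.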
If you want to run SplunkForwarder with a virtual account (which is recommended if you want to follow the principle of least privilege), there is also a way to enable reading of Sysmon logs: NT SERVICE\SplunkForwarder needs to be added to the Event Log Readers group. One way is to add it to a Group Policy and deploy it across your environment wherever your forwarders are installed.
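On a single host you can also add the virtual account manually from an elevated command prompt (GPO remains the better option at scale); a sketch:

```
net localgroup "Event Log Readers" "NT SERVICE\SplunkForwarder" /add
```

Restart the SplunkForwarder service afterwards so the new group membership is picked up.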
It's working.  I just added my second forwarder.  Thanks again!
The search did not work. It looks like it may be caused by the data index. I checked the Monitoring Console and ran a health check; it is showing license warnings and violations, with the category showing data indexing.
I used the Splunk Add-on for AWS to send log files stored in S3 to SQS using S3 event notifications, and configured Splunk to read the log files from SQS. However, I got an error saying that the S3 test message that is always sent first by S3 event notifications could not be parsed. Splunk on EC2 is given KMS decryption privileges as shown below:

{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": [
        "sqs:*",
        "s3:*",
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:sqs:ap-northeast-1:*************:poc-splunk-vpcflowlog*",
        "arn:aws:s3:::poc-splunk-vpcflowlog",
        "arn:aws:s3:::poc-splunk-vpcflowlog/*"
    ]
}

What could be the cause?
This will show you your ingest usage by sourcetype:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log type=Usage
| timechart limit=40 sum(b) as data by st

Also look at the Monitoring Console; that will give you information on your sourcetype/index ingestion as well.
That looks like it is more than 128 characters into the event, so you should set MAX_TIMESTAMP_LOOKAHEAD and, optionally, TIME_PREFIX for that data's sourcetype.
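A sketch of what that could look like in props.conf (the sourcetype name and values here are hypothetical - adjust them to where your timestamp actually sits in the event):

```
# props.conf on the parsing tier (indexer or heavy forwarder)
[my:sourcetype]
# Regex matching the text immediately before the timestamp;
# timestamp extraction starts right after this match
TIME_PREFIX = timestamp=
# How many characters past TIME_PREFIX to scan (default is 128)
MAX_TIMESTAMP_LOOKAHEAD = 40
```

Getting TIME_PREFIX right usually lets you keep MAX_TIMESTAMP_LOOKAHEAD small, which also speeds up parsing.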
My code was an example using your data - you are using that fixed set of strings in your code. You should run the rex against your raw data, not the fixed msgs field: remove the eval msgs... and the mvexpand; those were just example code. Your rex statement should either use _raw or, if you have those messages extracted to a separate field, use that field.
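In other words, the pattern goes straight onto your events - a sketch, with a placeholder search and capture-group name (keep your own pattern where indicated):

```
index=your_index sourcetype=your_sourcetype
| rex field=_raw "(?<msg>your_pattern_here)"
```

The eval/mvexpand scaffolding was only there to fabricate sample events; against real data, rex on _raw replaces all of it.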
Hello! We keep going over our license usage. We can't seem to find what is causing us to go over; we've gone over 3 times now. Any suggestions on how to find what is causing this, please?