All Posts


@gcusello Thanks for your reply, but the problem is that I am not able to find which index is causing the issue at that point in time and making the bucket count increase. How can we see the data ingestion trend for today?
I have a 3-node search head cluster and distributed indexers. We are getting the error below when running any type of search. Please suggest ways to avoid it. Error: (indexers)..........of 41 peers omitted] Could not load lookup=LOOKUP-connect_glpi
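One way to track down where the failing lookup is defined (a sketch; this assumes a *nix search head, and that LOOKUP-connect_glpi is an automatic lookup configured in some app's props.conf):

```
# Find the props.conf stanza that defines the automatic lookup
# and which app it lives in:
$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i "LOOKUP-connect_glpi"
```

Then confirm that the lookup definition and the lookup table file it references exist and are shared globally (readable by the searching role), so they are included in the knowledge bundle replicated to the 41 search peers; a lookup that is private or app-restricted will fail to load on the peers.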
Hi @Praz_123, before checking the volume of data, check whether you have connection issues (even temporary ones) between the Indexers and the Cluster Manager. You should have messages about this. Anyway, to check the volume of indexed data, you can use the License Monitoring feature in Settings or the Monitoring Console. Ciao. Giuseppe
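As a complement to the License Monitoring views, a search along these lines shows today's ingestion trend broken down by index (a sketch; it reads the standard license usage log, so run it where _internal is searchable):

```
index=_internal source=*license_usage.log type=Usage earliest=@d
| eval GB=b/1024/1024/1024
| timechart span=1h sum(GB) by idx
```

Here `b` is the licensed bytes per event and `idx` the target index; a sudden spike in one series usually points at the index driving the growth.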
Status is unchanged. Splunk ES is still a long way from supporting multi-tenancy as it stands. The request for multi-tenancy is marked as "Future prospect" on the Splunk Ideas portal: Add native multi-tenancy capability to Enterprise Security | Ideas.
How can we check whether the data coming into Splunk is causing problems on the Cluster Manager, making it unstable, pushing the peers' bucket count above 2k+, and turning RF and SF red?
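To see which indexes contribute most buckets per peer, one option is the `dbinspect` command (a sketch; run it from a search head that can see all peers, and expect it to be heavy on a large cluster):

```
| dbinspect index=*
| stats dc(bucketId) AS buckets BY index, splunk_server
| sort - buckets
```

An index with an unusually high bucket count, or many small buckets being created in a short window, is a common cause of Cluster Manager strain and RF/SF repair backlogs.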
@marnall - Yes, I thought about doing that myself; as you said, it's not 'clean' though, and we shouldn't really have to.
Hi @ramuzzini, sorry, but there's one thing that I don't understand: if you forced EventCode = 4720, why did you extract the EventCode from your results? You can have only one result, EventCode = 4720. If EventCode=4720 is fixed and you want to pass this EventCode to the drilldown, you can insert it in the drilldown row (Set $token_eventcode$ = 4720). Otherwise, you could modify your search in the Single Value: Acct Enable: index="wineventlog" EventCode=4720 | stats count BY EventCode. In this way you have the EventCode value to pass using the drilldown, even if, as I said, you don't need it. If instead the issue is that EventCode=4720 is passed using an input, so it can change, you can use my second solution or use the input token value in the drilldown (Set $token_eventcode$ = $input_token$). Ciao. Giuseppe
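In Simple XML, the fixed-value drilldown described above would look roughly like this (a sketch; the panel layout and token name are assumptions):

```
<single>
  <search>
    <query>index="wineventlog" EventCode=4720 | stats count</query>
  </search>
  <drilldown>
    <set token="token_eventcode">4720</set>
  </drilldown>
</single>
```

For the input-driven variant, the `<set>` element would carry the input's token instead, e.g. `<set token="token_eventcode">$input_token$</set>`.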
Hi @tungpx, the usual way to see whether a Forwarder's configuration is updated is to check whether updates are running or not, but you could also try to create an index-time field with the update version and check it. This is a description of how to do it: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configureindex-timefieldextraction Ciao. Giuseppe
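A minimal sketch of such an index-time field, following the linked docs (the stanza name and the version value are placeholders; note that index-time transforms are applied where parsing happens, i.e. on a heavy forwarder or indexer, not on a UF):

```
# props.conf, on the instance that parses the data
[my_sourcetype]
TRANSFORMS-set_app_version = add_app_version

# transforms.conf
[add_app_version]
REGEX = .
FORMAT = app_version::1.2.3
WRITE_META = true

# fields.conf, on the search head, so the indexed field is searchable
[app_version]
INDEXED = true
```

Bumping the version string in the deployed app then makes it visible in search (`app_version=1.2.3` vs older values), which tells you which hosts have picked up the update.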
Hi @Kenny_splunk, I agree with @PickleRick: you should try to describe what you did so we can understand what happened. Anyway, the issue is probably in the moved folders. But if you deleted the installation, it's very difficult to recover it, unless you can restore a backup. Maybe (and I say maybe) Splunk Support can help you. Anyway, as a last chance, you could try to move the indexes from their current position to a new, safe one and then create a fresh installation, which should run. Then you could stop Splunk and copy the saved index folders into the new position of $SPLUNK_DB (by default $SPLUNK_HOME/var/lib/splunk), or change the value of $SPLUNK_DB to point to the new position of the indexes. Finally, you should create all the stanzas for your indexes in one indexes.conf, using exactly the same names as your indexes. In this way it should run; let us know if you solved it. Ciao. Giuseppe
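For example, one stanza per restored index in indexes.conf, pointing at the copied folders (the index name here is a placeholder; use your real index names):

```
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Splunk will only pick up the existing buckets if the stanza name matches the original index name exactly, which is why reusing the old names matters.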
Hi @tej57, thanks for your answer. I hope this functionality will be released soon. This is a reason for us to keep using the classic version for most of our dashboards.
Thanks, but I already tried that and it does not work.
Ok. You did something, and now your environment somehow doesn't work. Without knowing that something and somehow (and not even knowing what version we're talking about; I can only assume we're talking about the Linux version), how are we supposed to know what's going on and how to fix it?
Hello, I have a deployment server and deployed an app to a Universal Forwarder, like I usually do (create an app folder -> create a local folder -> write inputs.conf -> set up the app and server class on the DS, tick disable/enable app, tick restart Splunkd). But after making sure of the log path and the permissions of the log file (664), I don't see the log being forwarded. I only manage the Splunk deployment, not the server that hosts the Universal Forwarder, so I asked the system team to check it for me. After some time, they got back to me and said there was no change to the inputs.conf file. They had to manually restart Splunk on the Universal Forwarder, and after that I finally saw the log ingested. So I want to know if there is an app, or a way, to check whether the app or the inputs.conf was changed according to my config on the DS; I can't ask the system team to check it for me all the time. Thank you.
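Two quick ways to verify what actually reached the forwarder (a sketch; the path assumes a *nix UF, and splunkd component names can vary between versions): ask btool for the effective inputs and where each setting comes from, and check the deployment client's phone-home activity in the UF's internal logs from the search head.

```
# On the Universal Forwarder: show the effective inputs.conf
# and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# From a search head that indexes the UF's _internal data:
index=_internal host=<uf_hostname> sourcetype=splunkd
    component=DC:DeploymentClient OR component=DeployedApplication
```

The second search should show the client phoning home and the app being downloaded/installed; if the install events are missing while phone-homes succeed, the server class or app scoping on the DS is the first thing to recheck.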
I have checked splunkd.log and haven't noticed any particular error or warning related to this, nor in the web UI logs.
Hello, we are experiencing an issue with the SOCRadar Threat Feed app in our Splunk cluster. The app is configured to download threat feeds every 4 hours; however, each feed pull results in duplicate events being downloaded and indexed. We need assistance in configuring the app to prevent this duplication and to ensure the data is deduplicated before being saved to the indexers.
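Until the ingestion side is fixed, a search-time workaround is to deduplicate on the indicator value when consuming the feed (a sketch; the index and field names below are assumptions, substitute the ones the app actually writes):

```
index=socradar_threat_feed
| dedup indicator_value
```

This does not reduce what is indexed (or licensed), so it is a stopgap; the durable fix is having the app track the last fetched feed state so each 4-hour pull only downloads new indicators.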
Haven't noticed any particular error in splunkd.log / UI access logs.
Since your last update on 21 Oct 2016 stating that Splunk Enterprise Security does not support multi-tenancy, what is the status right now? Does Splunk Enterprise Security now support multi-tenancy?
This worked! Much appreciated, thank you.
Thank you for your reply. The UDP 514 port was in use, and I have no idea why it was used by another process, so I needed to use another port to receive packets from the Palo Alto server. However, I solved this problem: the firewalld daemon was blocking the packets coming into Splunk. I stopped firewalld and could then search the Palo Alto logs. Next I'll move on to creating alerts from these logs.
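For future reference, a couple of standard Linux commands (a sketch; run as root) can identify what holds UDP 514 and open the port in firewalld instead of stopping the firewall entirely:

```
# Which process is bound to UDP 514?
ss -ulnp | grep :514

# Allow the syslog port through firewalld rather than disabling it
firewall-cmd --permanent --add-port=514/udp
firewall-cmd --reload
```

Leaving firewalld running with an explicit port rule is safer than stopping the daemon, which removes all host firewall protection.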
Try these props.conf settings:

[dolphin]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d\d:\d\d:\d\d\d
DATETIME_CONFIG = current