All Posts

Hi @gcusello, I noticed that Splunk does not support the Add-on for Workspace ONE and there is no documentation for it. Is there any supported app to parse VMware Workspace ONE MDM data?
Thanks @marnall, I will talk to the CB team for clarity. Thanks for telling me about the different Carbon Black product types. I needed a live query action in the CB Cloud app but did not find one, so I was wondering whether I could use another CB app. I found the action in splunk-soar-connectors/carbonblackresponse but have not yet tested whether it will work for CB Cloud. Otherwise I will need to call the CB Cloud APIs directly to execute the query. I have submitted an issue for the CB Cloud app to include this as an action: Carbon Black live query to search devices is absent within Carbon Black cloud SOAR app · Issue #16 · splunk-soar-connectors/carbonblackcloud.
Hi @shoaibalimir, storage dimensioning is a job for an architect! Anyway, it depends on whether or not you have a cluster; if not, you can calculate the storage this way: storage = (average_license_consumption_by_day / 2) * retention. If you have a cluster, you must also account for the Replication Factor and the Search Factor. Ciao. Giuseppe
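As a rough worked example (the figures here are hypothetical, not from the original question): with an average license consumption of 100 GB/day and 90 days of retention, the formula above gives

storage = (100 GB / 2) * 90 = 4,500 GB ≈ 4.5 TB

on a standalone indexer; in a cluster, the Replication Factor and Search Factor increase this further.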
I am checking this time from the backend of Splunk; I am tailing the file splunkd.log and it shows a different time than the system time.
Hi @Praz_123, as I said, in the License consumption page or in the Monitoring Console you can see the ingestion rate for each index. Ciao. Giuseppe
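If you prefer a search over the UI, a common sketch against the internal license usage log (this assumes the _internal data from the license manager is searchable from where you run it) is:

index=_internal source=*license_usage.log type=Usage earliest=@d
| stats sum(b) AS bytes BY idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB

This shows today's ingested volume per index, largest first.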
Hi @santhipriya, the message is saying that there's a missing lookup (probably an automatic one) in your search head cluster. You have to work out which app it's located in and then create or disable it. Ciao. Giuseppe
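One way to locate where that automatic lookup is defined (a sketch; run it on a cluster member and adjust $SPLUNK_HOME to your installation path):

$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i connect_glpi

The --debug flag prints the file each setting comes from, which tells you which app defines the lookup.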
Hi, I have a use case in which I need to assess the storage difference of an index. For example, I have an index which holds around 100.15 GB of data with Searchable Retention set to 1095 days. Now, if I reduce the Searchable Retention to, say, 365 days, what would be the approximate storage utilization of the index? I need to output these results in tabular form on a dashboard. Please assist me with this. Thank you in advance.
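One possible way to approximate this on a dashboard (a sketch; the index name is a placeholder, and bucket boundaries make the result approximate rather than exact):

| dbinspect index=your_index
| eval age_days = (now() - startEpoch) / 86400
| stats sum(sizeOnDiskMB) AS current_MB, sum(eval(if(age_days <= 365, sizeOnDiskMB, 0))) AS projected_365d_MB

Comparing current_MB with projected_365d_MB gives a rough view of the storage you would keep with the shorter retention.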
Hello, I am reaching out to inquire whether Splunk SOAR currently supports Red Hat Enterprise Linux 9 (RHEL9). We are considering an upgrade to our infrastructure and want to ensure compatibility with Splunk SOAR. Thank you!
@gcusello Thanks for your reply, but the problem is that I am not able to find which index is causing the issue at the point in time when the bucket count increases. How can we see the data ingestion trend for today?
I have a 3-node search head cluster and distributed indexers. We are getting the below error when running any type of search; please suggest ways to avoid it. Error: (indexers)..........of 41 peers omitted] Could not load lookup=LOOKUP-connect_glpi
Hi @Praz_123, before checking the volume of data, check whether you have connection issues (even temporary ones) between the Indexers and the CM; you should see messages about it. Anyway, to check the volume of indexed data, you can use the License Monitoring feature in Settings or the Monitoring Console. Ciao. Giuseppe
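For the connectivity check, one possible sketch against the internal logs (component names can vary a bit between versions) is:

index=_internal sourcetype=splunkd (component=CMMaster OR component=CMPeer OR component=CMSlave) (log_level=WARN OR log_level=ERROR)
| stats count BY host component

A spike in warnings or errors for a particular peer usually lines up with the periods when the CM looks unstable.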
Status is unchanged. Splunk ES is still a long way from supporting multi-tenancy as it stands. The request for multi-tenancy is marked as "Future prospect" on the Splunk Ideas portal: Add native multi-tenancy capability to Enterprise Security | Ideas.
How can we check whether the data coming into Splunk is creating problems for the CM, making it unstable and leading the peers to reach more than 2k+, with RF and SF also red?
@marnall - Yes, I thought about doing that myself; as you said, it's not 'clean' though, and we shouldn't really have to.
Hi @ramuzzini, sorry, but there's one thing I don't understand: if you fixed EventCode = 4720, why did you extract the EventCode from your results? You can have only one result, EventCode = 4720. If EventCode=4720 is fixed and you want to pass this EventCode to the drilldown, you can set it in the drilldown row: (Set $token_eventcode$ = 4720). Otherwise, you could modify your search in the Single Value "Acct Enable": index="wineventlog" EventCode=4720 | stats count BY EventCode — in this way you have the EventCode value to pass using the drilldown, even if, as I said, you don't need it. If instead the issue is that EventCode=4720 is passed using an input, so it can change, you can use my second solution or use the input token value in the drilldown (Set $token_eventcode$ = $input_token$). Ciao. Giuseppe
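In Simple XML, the first option looks roughly like this (a sketch built around the search quoted above):

<single>
  <search>
    <query>index="wineventlog" EventCode=4720 | stats count</query>
  </search>
  <drilldown>
    <set token="token_eventcode">4720</set>
  </drilldown>
</single>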
Hi @tungpx, the usual way to see whether a Forwarder configuration is updated is to check if updates are running or not, but you could also try to create an index-time field with the update version and check that. This is a description of how to do it: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configureindex-timefieldextraction Ciao. Giuseppe
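A minimal sketch of that approach using inputs.conf on the forwarder (the field name, value, and monitor path here are hypothetical; see the linked docs for the full props/transforms variant):

# inputs.conf in the app deployed to the forwarder
[monitor:///var/log/myapp]
_meta = cfg_version::1.2.3

# fields.conf on the search heads
[cfg_version]
INDEXED = true

Bumping the value together with each app update lets you search for which hosts are still sending the old cfg_version.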
Hi @Kenny_splunk, I agree with @PickleRick: you should try to describe what you did, so we can understand what happened. Anyway, the issue is probably in the moved folders. But if you deleted the installation, it's very difficult to recover it unless you can restore a backup. Maybe (and I say maybe) Splunk Support can help you. Anyway, as a last chance, you could try to move the indexes from their current location to a new, safe one and then create a fresh installation, which should run. Then you could stop Splunk and copy the saved index folders to the new location of $SPLUNK_DB (by default $SPLUNK_HOME/var/lib/splunk), or change the value of $SPLUNK_DB to point to the new location of the indexes. Finally, you should create the stanzas for all your indexes in one indexes.conf, using exactly the same names as your indexes. In this way it should run; let us know if you solved it. Ciao. Giuseppe
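A minimal indexes.conf stanza for one recovered index might look like this (the index name is a placeholder; the paths assume the default $SPLUNK_DB layout):

[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = $SPLUNK_DB/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb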
Hi @tej57, thanks for your answer. I hope this functionality will be implemented soon. It's a reason for us to keep using the classic version for most of our dashboards.
Thanks, but I already tried that and it does not work.
OK. You did something, and now your environment somehow doesn't work. Without knowing what that something was and how it doesn't work (and not even knowing what version we're talking about; I can only assume it's the Linux version), how are we supposed to know what's going on and how to fix it?