All Posts


Hi @Cheng2Ready
You can use REST for that, like in this example:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.snow_incident=1
| table title, disabled, action.snow_incident.param.assignment_group, action.snow_incident.param.contact_type

The fields related to the ServiceNow alert actions follow the pattern action.snow_event* or action.snow_incident*.
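If you only need the searches that are currently enabled, a small variation on the same search should work (disabled is one of the fields already returned by the saved/searches endpoint):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.snow_incident=1 disabled=0
| table title, action.snow_incident.param.assignment_group, action.snow_incident.param.contact_type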
Are you sure you're not talking about the first 256 bytes of a monitored file? (Of course, the header length is configurable.) The only duplication detection I recall is connected with useACK, and even then it indexes the event twice but emits a warning, AFAIR.
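For reference, the length of that initial CRC (and the salt mixed into it) is set per input in inputs.conf. A minimal sketch, assuming a hypothetical monitored path and illustrative values:

[monitor:///var/log/myapp/app.log]
# hash more than the default 256 bytes when deciding whether a file was already seen
initCrcLength = 1024
# add the full file path to the hash so files with identical headers are not skipped
crcSalt = <SOURCE>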
Did you enable inputs in the TA? Is data from the TA being indexed?
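A quick way to check whether the add-on's data is reaching the indexers is a search over its usual sourcetypes. The ones below are the common scripted-input sourcetypes from the Splunk Add-on for Unix and Linux, and the index is an assumption, so adjust both to your setup:

index=* (sourcetype=cpu OR sourcetype=ps OR sourcetype=vmstat OR sourcetype=df)
| stats count latest(_time) AS latest_event BY host sourcetype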
I have installed a UF on one of the servers and also installed the Unix add-on, then restarted the UF service. However, the entities didn't populate on the ITSI page. Could someone please help with this?
In an on-premises Red Hat OpenShift cluster, I need to collect logs, metrics, and traces of the cluster. When there is no internet connection in the on-prem environment, how can I do this?
@livehybrid @gcusello My requirement is that I have to send events via Alert_Webhook, so we need to allow the sender IP (in my case, Splunk Cloud) at the receiving end of the webhook. What IP do we need to whitelist, and where do we get that IP from?
Splunk does not work like a database in this respect, so it depends on how Splunk has been set up to detect "duplicates" of this nature. This is normally done with searches in reports, alerts, or dashboards, and those searches will depend on your data. What searches do you already have set up? What does your data look like? How is it being ingested into Splunk? What criteria do you want to use to determine that an event represents a duplicate? Please provide as much detail as you can (without giving away sensitive information).
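As a starting point, duplicate detection is often just a stats search over whatever key identifies a record. In the sketch below, the index, sourcetype, case_id, and case_closed_time are placeholder names standing in for your own data:

index=your_case_index sourcetype=your_case_sourcetype
| stats count values(case_closed_time) AS close_times BY case_id
| where count > 1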
Hi @NevilleRadcliff , let us know if we can help you more, or, please, accept one answer for the other members of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @ws , Splunk indexes all new data; the only exception is when the first 256 characters of the event are the same. Then (after indexing) you can dedup results, excluding duplicated data from the results based on your requirements. Deduping is usually done on one or more fields; it's also possible to dedup full duplicates on _raw. Ciao. Giuseppe
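For example, a post-search dedup could look like the sketch below, where the index and case_id are placeholders for your own data; swapping case_id for _raw would instead drop only fully identical events:

index=your_case_index
| dedup case_id sortby -_time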
Hi @NWC  Unfortunately the "Qualys Technology Add-on (TA) for Splunk" is not supported/compatible with Splunk Cloud. When you go to the Splunkbase page (https://splunkbase.splunk.com/app/2964) and click on the "Version History" tab, it shows the compatibility. In order for it to be installed on Splunk Cloud, it needs to be listed in the compatibility cell for that version. Unfortunately, as this isn't cloud compatible, you will not be able to install it on your Splunk Cloud instance. You might want to consider contacting Qualys to see if they will update the app to make it Splunk Cloud compatible. Note: when installing apps on Splunk Cloud, the system checks the app ID against apps held in Splunkbase; if an app with the same ID exists in Splunkbase, it will suggest installing it via the App Browser page. Obviously this is only possible if the app is cloud compatible. Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards Will
How about using the "top" command, something like this?

index=_internal group=per_index_thruput series=* | top 10 host
Hi @hansmaldonado
The easiest thing might be to push an update out via the Cluster Manager to point to the new home path; however, this will ultimately mean that you have zero cache, and subsequent searches may be slow while the cache re-populates.

If you want to retain the cached data to prevent this, I think it may be possible, depending on your configuration/architecture. While in maintenance mode, shut down one indexer at a time, move the cached files from the existing location to the new home path, and then update indexes.conf to reflect the new path. Once you start the indexer back up, you will have the original cache files locally on that indexer, but in the new location. You will then need to do this for each indexer.

This isn't necessarily the ideal way to do it, but it means you do not need to re-download cached data. It also means your indexes.conf will vary between indexers until you have completed the process. Once complete, you should push out an updated indexes.conf via the CM with the new settings so that you aren't in a position where it could revert back!

I would recommend trying this approach in a development environment first to ensure you are happy with the process involved.

Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
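A rough sketch of the per-indexer steps described above, assuming a hypothetical index called myindex with its cache moving from /old/splunk to /new/splunk (all names and paths are illustrative, adjust to your environment):

# on the cluster manager
splunk enable maintenance-mode

# on each indexer, one at a time
splunk stop
mv /old/splunk/myindex /new/splunk/myindex

# indexes.conf on that indexer (push the same change from the CM once all indexers are done)
[myindex]
homePath = /new/splunk/myindex/db

splunk start

# back on the cluster manager, once every indexer has been migrated
splunk disable maintenance-mode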
I am currently using a customized app to connect to a case/monitoring system and retrieve data. I found out that Splunk has the ability to detect whether data has already been indexed. But what about the following scenario: will it be considered a duplicate or new data, since the updated closed case has a new close time? One of the previously closed cases has been reopened and closed again with a new case closed time. Will Splunk Enterprise consider this new data to index?
Thanks, I will keep it in mind.
Hi @Ben , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
indexes.conf

[volume:hot]
path = /mnt/splunk/hot
maxVolumeDataSizeMB = 40

[volume:cold]
path = /mnt/splunk/cold
maxVolumeDataSizeMB = 40

[A]
homePath = volume:hot/A/db
coldPath = volume:cold/A/colddb
maxDataSize = 1
maxTotalDataSizeMB = 90
thawedPath = $SPLUNK_DB/A/thaweddb

[_internal]
homePath = volume:cold/_internaldb/db
coldPath = volume:cold/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
maxDataSize = 1
maxTotalDataSizeMB = 90

I collected data into each index, and the amounts stored in the cold volume were A = 30 MB and _internaldb/db = 10 MB. I understood this to be because the data volume and ingestion rate of the A index were larger and faster than those of _internal.

If I stop collecting data into the A index and keep collecting only into _internal, the old buckets in _internaldb/db are rolled to _internaldb/colddb in the order they were written, but they are not retained in colddb; they are deleted immediately. In addition, the data that existed in A/colddb is deleted, oldest first. I understood that because the cold volume is limited to 40 and is already full, the buckets rolled to _internaldb/colddb cannot be retained and are deleted immediately. But why is the data in A/colddb deleted as well? And after that, once A/colddb drops to 20, it stops being deleted.

The behavior I expected was that A/colddb would keep being deleted until it reached 0, and only then would the old buckets from _internaldb/db be moved to _internaldb/colddb and retained. I'm curious why the results differ from what I expected, and whether, when maxTotalDataSizeMB is the same, the volume maintains the same ratio between indexes.
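To watch what is actually happening to the buckets while testing this, a dbinspect search like the sketch below may help; the index names match the stanzas above, and the fields listed are standard dbinspect output:

| dbinspect index=A index=_internal
| table index bucketId state sizeOnDiskMB path
| sort index state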
Thanks again
What can I say? The aligntime option works for me. Like:

index=firewall earliest=-15m | timechart count span=1m aligntime=30
1. This is not Splunk Support. This is a volunteer-powered users community.
2. We have no knowledge of who you are or where you are, let alone who your Account Manager is.
3. As you are a customer, you should have someone with whom you've dealt before. If you can't find that, try using either the Sales (preferably) or Support contact for your location - https://www.splunk.com/en_us/about-splunk/contact-us.html
I have déjà vu; I think I answered the same question recently. But to the point:
1) There is no way to create an input with a dynamic definition using just Splunk's built-in mechanisms.
2) It's hard to believe that you have a decently sized environment without any standardization. If you do, I strongly advise you to get it cleaned up, because otherwise it will bite you in the most inconvenient place at the most inconvenient time.
3) A very ugly way to try to work around it could be to define an "input" running your script, which would generate inputs.conf dynamically, but this would require bending over backwards to handle forwarder restarts. I would very strongly (as opposed to just "strongly" from the previous point) advise against it.