All Posts

Yes, I had a discussion with the local sales representative in Vietnam, and they said we don't need to pay more if we want to create a new instance for our private cloud, because we purchased a capacity-per-day license type.
Thanks for your recommendation.
Finding something that is not there is not Splunk's strong suit.  See this blog entry for a good write-up on it. https://www.duanewaddle.com/proving-a-negative/ Consider using the TrackMe app (https://splunkbase.splunk.com/app/4621)
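If you only need a quick search-based check while evaluating TrackMe, a common pattern is to compare when each host was last seen against a threshold. A minimal sketch, where the index scope and the one-hour threshold are assumptions to adapt to your environment:

| tstats latest(_time) as last_seen where index=* by host
| eval minutes_since_last_event = round((now() - last_seen) / 60)
| where minutes_since_last_event > 60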
Every ingested event in Splunk must have a time association. It doesn't really matter if that's just the ingestion time, but a lot will depend on what you want to do with that data once it's there. Also, bear in mind that Splunk is generally built around many single- or multi-line events. If you're going to ingest large documents, Splunk is not really designed for that, as certain soft limits apply, such as the default event length limit of 10,000 characters, I believe. However, there are still ways to do what you want, e.g. break a document into lines of text and ingest those into Splunk with time, text, line number, and document name per event, so you can reconstitute the document by ordering its rows by line number (see the sketch below). What's your use case?
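For the reconstitution step, a minimal sketch, assuming each line was indexed as an event carrying document_name, line_num, and text fields (the index and field names are assumptions, not a fixed convention):

index=documents document_name="example.txt"
| sort 0 line_num
| stats list(text) as lines by document_name

Note that list() keeps at most 100 values by default, so very long documents would need a different aggregation, or could simply be viewed as raw events sorted by line_num.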
Hi, we have 2 HFs, active and passive, and I shut off the Splunk service on one HF. I want to be alerted only when both of my HFs are not sending logs / the Splunk service is down on both. I don't want any alerts as long as at least one of the HFs is running.
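One way to express that condition as an alert search against the indexers' _internal data, as a sketch (hf1 and hf2 are placeholder hostnames and the 15-minute window is an assumption):

index=_internal source=*metrics.log* group=tcpin_connections hostname IN ("hf1", "hf2") earliest=-15m
| stats count AS events_from_hfs
| where events_from_hfs = 0

Because stats count always returns a row, the alert can simply trigger when the number of results is greater than zero, i.e. only when neither HF has forwarded anything in the window.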
Hi @JKEverything
Unfortunately, it seems that Splunk has problems using spath when names contain dots, so extracting the "lds:UiApi.getRecord" part and splitting it might not be that easy. However, you can try the following workaround:

| makeresults
| eval payload = "{\"cacheStats\": {\"lds:UiApi.getRecord\": {\"hits\": 2, \"misses\": 1}}}"
| spath input=payload output=cacheStats path=cacheStats
| eval cacheStats = replace(cacheStats, "lds:UiApi.getRecord", "lds:UiApi_getRecord")
| spath input=cacheStats path="lds:UiApi_getRecord.hits" output=hits
| spath input=cacheStats path="lds:UiApi_getRecord.misses" output=misses

This would be a workaround for your use case.
P.S.: Karma points are always appreciated
There is no time constraint for warm buckets.  Warm buckets roll to cold when there are too many of them (maxWarmDBCount) or there's too much data in the warm volume (homePath.maxDataSizeMB or maxVolumeDataSizeMB). See https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Configureindexstorage for more information.
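If you want warm buckets to roll to cold sooner, those same settings are what you would tune in indexes.conf. A minimal sketch, where the index name and the specific values are assumptions to adapt to your retention goals (there is no time-based warm-to-cold setting):

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
maxWarmDBCount = 100
homePath.maxDataSizeMB = 200000

With settings like these, buckets roll from warm to cold once there are more than 100 warm buckets or the home path exceeds roughly 200 GB.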
I have a field payload containing the following JSON:

{
  "cacheStats": {
    "lds:UiApi.getRecord": {
      "hits": 0,
      "misses": 1
    }
  }
}

I can normally use spath to retrieve the hits and misses values:

cacheRecordHit=spath(payload,"cacheStats.someCacheProperty.hits")

But it seems the period, and possibly the colon, of the lds:UiApi.getRecord property are preventing it from navigating the JSON, such that:

| eval cacheRecordHit=spath(payload,"cacheStats.lds:UiApi.getRecord.hits")

returns no data. I have tried the solution in this answer:

| spath path=payload output=convertedPayload
| eval convertedPayload=replace(convertedPayload,"lds:UiApi.getRecord","lds_UiApi_getRecord")
| eval cacheRecordHit=spath(convertedPayload,"cacheStats.lds:UiApi.getRecord.hits")
| stats count,sum(hits)

but hits still returns as null. Appreciate any insights.
You have whitespaces in your query. Try using:

<base_query....> search="*action*view*User_Management_Hourra*"

OR

<base_query....> search="*action*view*Hourra*"

Best regards

P.S. Another question: do you have admin permissions and can you access the _internal index?

index=_internal sourcetype=splunkd earliest=-1m
I'm considering loading readable/textual files, in different formats, into Splunk to get the benefits of indexing and fast searching. The files are static and don't change like regular logs. Is this use case supported by Splunk?
It's not working.
Hi @Keerthi,
If you have a dashboard named "Your_Dashboard_Name", you can use the following query to see who visited it:

index=_internal sourcetype=splunkd_ui_access namespace=* user="*" search="*action*view*Your_Dashboard_Name*"

For special fields, you may need to create your own regex to extract the required information.
P.S.: Karma points are always appreciated
Yes, I'm using different sourcetypes. I would like to add additional data that will help distinguish the logs, something like tags or a subcategory within the sourcetype.
Hello All,

The question is: does IOWAIT mean anything? I am in the process of upgrading Splunk 8.2.12 to 9.1.2, and then 9.2.1. I have not yet upgraded to 9.1.2. The Health Report is set at default settings (i.e. 3, etc.). I have tried the suggestion of doubling the threshold values, but eventually get a yellow warning, or sometimes red, etc. I am running Splunk Enterprise 8.2.12 on Oracle Linux (ver 7.9) with 12 CPUs and 64 GB of memory. Do these settings have any benefit for the IOWAIT thresholds?

I see where I can disable IOWAIT - or does it make any sense to try to generate some sort of diag, which has a link when opening the "Health Report Manager"? Any info here? Am I missing something?

Thanks as always for a very helpful Splunk community.

EWHOLZ
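In case it helps while you test, the health report features can be tuned or switched off in health.conf. A minimal sketch, under the assumption that the stanza and setting names below match your version (worth checking $SPLUNK_HOME/etc/system/README/health.conf.spec before applying anything):

[feature:iowait]
disabled = 1

Disabling only suppresses the indicator; it does not change the underlying I/O behaviour, so it is usually worth checking iostat/sar on the host as well.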
We see cases where warm buckets are not being moved to cold storage for six weeks, and we wonder how to set it up correctly so they move within two or three weeks.
Hi All,

Has anyone explored https://github.com/splunk/splunk-conf-imt ? We have Splunk Cloud, and I'm wondering how I can proceed with testing this, as the steps are not quite clear to me. Appreciate the help.
You need to follow these steps (it's a basic SMTP connection) for alerts on Splunk Cloud or on-premises: https://docs.splunk.com/Documentation/Splunk/9.2.1/Alert/Emailnotification

There aren't that many settings for this in Splunk, so it should work provided your SMTP / email server allows it. We point to an SMTP server as per the config above. If it's not working and you feel you have set it up according to the Splunk docs, I would look at your "Exchange Admin Centre" and consult the admin to ensure Splunk can send to the SMTP server.
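For reference, on Splunk Enterprise those UI settings (Settings > Server settings > Email settings) end up in alert_actions.conf. A minimal sketch, where the server, port, and account are placeholders for your environment:

[email]
mailserver = smtp.example.com:587
from = splunk-alerts@example.com
use_tls = 1
auth_username = splunk-alerts@example.com

The password should be entered via the UI so it is stored encrypted rather than written into the .conf file in clear text.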
Ideally, Splunk would know it's creating an event that's too large and modify TRUNCATE accordingly for that sourcetype.  For log messages that glob together several pieces of information at run-time (like many audit events), the true size of the event won't be known in advance.
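Until something like that exists, the practical workaround is to raise TRUNCATE in props.conf for the affected sourcetype. A minimal sketch, where the sourcetype name and the 100000-byte limit are assumptions (TRUNCATE = 0 disables truncation entirely, at the cost of memory on pathological events):

[my_audit_sourcetype]
TRUNCATE = 100000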
@sivakrishna Have you assigned the user to a group in Azure AD and mapped that group to a role in Splunk?
I wrote this code but no luck:

index=_internal sourcetype=splunkd_ui_access
| rex field=uri_path "app/(?<app>[^/]+)/(?<dashboard>[^/]+)"
| search app="dashboards" dashboard="User_Management_Hourra"
| stats dc(user) as unique_users