All Posts

I can create a query and produce a time chart so I can see the load across the set of CPUs:

|timechart values(VALUE) span=15m by cpu limit=0

I can see a trend that one CPU has a higher load. I can also create a query using stats to get the avg/max/range of the load value:

stats max(VALUE) as MaxV, mean(VALUE) as MeanV, range(VALUE) as Delta by _time

What I want to do is identify any CPU that's running a higher load than the average plus some sort of fiddle factor.
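A minimal SPL sketch of one way to do that, assuming the VALUE and cpu fields from the searches above (the 1.2 multiplier stands in for the fiddle factor and is not a recommendation):

<base_query....>
| bin _time span=15m
| stats avg(VALUE) as cpu_load by _time cpu
| eventstats avg(cpu_load) as overall_avg by _time
| where cpu_load > overall_avg * 1.2
| stats count as busy_intervals by cpu
| sort - busy_intervals

Per 15-minute slot, this flags any CPU whose load exceeds the across-CPU average by the chosen margin, then counts how often each CPU was flagged.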
Thank you for your responses @tscroggins @PickleRick 
Looking for ways to refresh the client list that phones home to the deployment server without restarting the Splunk service or taking over the server. We have a few onboarded sources that recycle their instances every 24 hours; within a few days the count of clients becomes 4 times our usual number, and unless something is done the DS becomes slower. The only way to reset this list seems to be a Splunk restart, which we want to avoid. Has anyone faced something similar?
Hi @Pratyush, yes, the Azure team has assigned a user to the groups they created, and we have mapped that group to Splunk.
@inventsekar @deepakc I have attached screenshots below showing the correct port opened and listening. Please validate. [screenshots: on indexer, on UF]
Use this:

| spath input=payload
| rename cacheStats.lds:UiApi.getRecord.* as *

It works with or without the rename, but unless you rename, remember that you need to wrap those fields in single quotes if you want to use them in subsequent eval statements (on the right-hand side).
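A quick sketch of that single-quote rule without the rename, assuming spath extracts those fields as described above (hit_ratio is just an illustrative name):

| spath input=payload
| eval hit_ratio = 'cacheStats.lds:UiApi.getRecord.hits' / ('cacheStats.lds:UiApi.getRecord.hits' + 'cacheStats.lds:UiApi.getRecord.misses')

The single quotes tell eval to treat the whole dotted name as one field reference rather than a string literal or an expression.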
I'm running Splunk Enterprise 9.1.1. It is a relatively fresh installation (done this year). The Splunk forwarders are also using version 9.1.1 of the agent. The indexer is also the deployment server. Beyond that, I only have forwarders forwarding to it. I have one Linux host (RedHat 8.9) with this problem. I've deployed Splunk_TA_nix and enabled rlog.sh to show info from /var/log/audit/audit.log.

Using today as an example (06/05/2024), I don't see entries for 06/05/2024, but I do see logs from today under 05/06/2024. Example from the Splunk search page:

index="linux_hosts" host=bad_host          (last 30 days)
05/06/2024 at left side of events     audit data...........(06/05/2024 14:32:12) audit data.........

As I mentioned above, I have one deployment server, and all forwarders use the same centralized one. It's a small environment, ~25 Linux hosts (RedHat 7 and 8), and this is the only RedHat 8 with this problem. I tried reinstalling the Splunk forwarder (completely deleted /path/to/splunkforwarder once I uninstalled it).

I know a little about using props.conf with TIME_FORMAT and have not done so. My logic is that if I needed it, I'd see this on all forwarders, not just the one with the problem. I ran localectl and it shows en_US. ausearch -i (the same thing rlog.sh does) shows the dates/times as I'd expect. Anything else I should look for from the OS perspective? Any suggestions on what I could do from Splunk?

Also, I noticed that when I go to the _internal index, dates/times are consistent. When I use my normal index (linux_hosts), this one RH8 has the problem; the other RedHat 8 hosts are what I'd expect.

A side note: someone else suspected this host wasn't logging, so they did a manual import of the audit.log files. Mind you, the dates in the file were not parsed since they didn't go through rlog.sh (ausearch -i) first. Could this also be part of the problem? If so, how can I undo what was done?

Thanks!
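For reference, if it did come down to forcing the timestamp format, a props.conf sketch on the indexer (where parsing happens) might look like this; the stanza name is a placeholder and TIME_PREFIX is an assumption based on the sample event above, where the timestamp sits in parentheses:

# props.conf (stanza name is a placeholder for the rlog.sh sourcetype)
[your_audit_sourcetype]
TIME_PREFIX = \(
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25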
Yeah, I had a discussion with a sales representative locally in Vietnam, and they said we don't need to pay more if we want to create a new instance for our private cloud because we purchased a license type with capacity/day.
Thanks for your recommendation.
Finding something that is not there is not Splunk's strong suit.  See this blog entry for a good write-up on it. https://www.duanewaddle.com/proving-a-negative/ Consider using the TrackMe app (https://splunkbase.splunk.com/app/4621)
Every ingested event in Splunk must have a time association. It doesn't really matter if that's just the ingest time, but a lot will depend on what you want to do with that data once it's there.

Also, bear in mind that Splunk is generally about multiple single- or multi-line events. If you're going to ingest large documents, Splunk is not really designed for that, as certain soft limits apply, such as an event length limit of 10,000 chars, I believe. However, there are still ways you can do what you want, e.g. break a document into lines of text and ingest those into Splunk, with time, text, line#, and document_name per event, so you could reconstitute the document by ordering the rows by line number. What's your use case?
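A minimal sketch of the reconstitution step, assuming events carry document_name, line_number, and line_text fields as described (the index and field names are placeholders):

index=static_docs document_name="report.txt"
| sort 0 line_number
| stats list(line_text) as document_lines by document_name

Note that stats list() keeps only the first 100 values, so for longer documents you would stop after the sort and read the events in order instead.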
Hi, we have 2 HFs, active and passive, and I shut off the Splunk service on 1 HF. I want to be alerted only when both of my HFs are not sending logs / the Splunk service is down. I don't want any alerts as long as at least one of the HFs is running.
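One way to sketch this as an alert, assuming the HF hostnames are hf1 and hf2 (placeholders) and that a healthy HF writes to _internal:

| makeresults format=csv data="host
hf1
hf2"
| join type=left host
    [| tstats latest(_time) as last_seen where index=_internal host IN (hf1, hf2) by host]
| eval down=if(isnull(last_seen) OR now()-last_seen>900, 1, 0)
| stats sum(down) as hfs_down
| where hfs_down=2

The hard-coded host list is what makes the "both are silent" case detectable at all; the 15-minute (900s) threshold is arbitrary. Trigger the alert when results exist, so one running HF produces no alert.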
Hi @JKEverything 
Unfortunately, it seems that Splunk has problems using spath when names contain dots, so extracting the "lds:UiApi.getRecord" part and splitting it might not be that easy. However, you can try the following workaround:

| makeresults
| eval payload = "{\"cacheStats\": {\"lds:UiApi.getRecord\": {\"hits\": 2, \"misses\": 1}}}"
| spath input=payload output=cacheStats path=cacheStats
| eval cacheStats = replace(cacheStats, "lds:UiApi.getRecord", "lds:UiApi_getRecord")
| spath input=cacheStats path="lds:UiApi_getRecord.hits" output=hits
| spath input=cacheStats path="lds:UiApi_getRecord.misses" output=misses

This would be a workaround for your use case.
P.S.: Karma points are always appreciated
There is no time constraint for warm buckets.  Warm buckets roll to cold when there are too many of them (maxWarmDBCount) or there's too much data in the warm volume (homePath.maxDataSizeMB or maxVolumeDataSizeMB). See https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Configureindexstorage for more information.
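For illustration, a minimal indexes.conf sketch of the settings named above (the stanza name and values are placeholders, not recommendations):

# indexes.conf
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
maxWarmDBCount = 300
homePath.maxDataSizeMB = 500000

Whichever limit is hit first, bucket count or size, triggers the roll from warm to cold.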
I have a field payload containing the following JSON:

{
  "cacheStats": {
    "lds:UiApi.getRecord": {
      "hits": 0,
      "misses": 1
    }
  }
}

I can normally use spath to retrieve the hits and misses values:

cacheRecordHit=spath(payload,"cacheStats.someCacheProperty.hits")

But it seems the period and possibly the colon of the lds:UiApi.getRecord property are preventing it from navigating the JSON, such that:

| eval cacheRecordHit=spath(payload,"cacheStats.lds:UiApi.getRecord.hits")

returns no data. I have tried the solution in this answer:

| spath path=payload output=convertedPayload
| eval convertedPayload=replace(convertedPayload,"lds:UiApi.getRecord","lds_UiApi_getRecord")
| eval cacheRecordHit=spath(convertedPayload,"cacheStats.lds:UiApi.getRecord.hits")
| stats count,sum(hits)

but hits still returns as null. Appreciate any insights.
You have whitespace in your query; try:

<base_query....> search="*action*view*User_Management_Hourra*"

OR

<base_query....> search="*action*view*Hourra*"

Best regards.
P.S. Another question: do you have admin permissions, and can you access the _internal index?

index=_internal sourcetype=splunkd earliest=-1m
I'm considering loading readable/textual files, in different formats, into Splunk to get the benefits of indexing and fast searching. The files are static and don't change like regular logs. Is this use case supported by Splunk?
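If a one-time load is all that's needed, one option is the oneshot CLI; a sketch, where the path, index, and sourcetype names are placeholders:

$SPLUNK_HOME/bin/splunk add oneshot /data/docs/report.txt -index static_docs -sourcetype plain_text

Unlike a monitor input, oneshot reads the file once and does not keep watching it, which suits files that never change.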
It's not working.
Hi @Keerthi,
If you have a dashboard named "Your_Dashboard_Name", you can use the following query to see who visited it:

index=_internal sourcetype=splunkd_ui_access namespace=* user="*" search="*action*view*Your_Dashboard_Name*"

For special fields, you may need to create your own regex to extract the required information.
P.S.: Karma points are always appreciated
Yes, I'm using a different sourcetype. I would like to add additional data that will help distinguish the logs, something like tags or a sub-category within the sourcetype.
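One hedged sketch of the tag route using eventtypes, with all names made up for illustration:

# eventtypes.conf
[myapp_auth_events]
search = sourcetype=my_sourcetype source="*auth*"

# tags.conf
[eventtype=myapp_auth_events]
auth = enabled

Searching tag=auth then selects that slice without touching the sourcetype itself.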