All Posts


Start with this search and look to see when you last got events for that index and which host or hosts sent them:
| tstats latest(_time) as LatestEvent where index=waf_imperva by host
Then backtrack from there to figure out why you don't have any events.
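If that tstats search returns nothing at all, a follow-up check is whether the indexers ever saw traffic for the index. A minimal sketch using the internal throughput metrics (assumes you can read the _internal index; the series value is the index name):

```spl
index=_internal source=*metrics.log group=per_index_thruput series=waf_imperva
| timechart span=1h sum(kb) as indexed_kb
```

If this is also empty over a long time range, the data likely never reached the indexers (forwarder, inputs, or permissions), rather than having been aged out.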
The query should include a result for index=waf_imperva. However, that result is not there. How do I ensure waf_imperva is included in the query, and how do I troubleshoot why it is missing?
Hi @smithy001, the capacity of the storage must be calculated in a Capacity Plan: you have to define how long data remain in Warm buckets before passing to Cold. If you have few data in Hot/Warm and a full storage in Cold status, you have to rebuild your Capacity Planning. Anyway, as I said, Cold data are usually on less expensive storage, so you should analyze your data to define the correct point for the status change. For example, you could keep two months instead of one month in Warm status; in this way you'll have better performance in searches, but either way you have to correctly analyze and design your data flows in a Capacity Plan. Ciao. Giuseppe
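To see how long data actually sits in each bucket state before redoing the capacity plan, dbinspect can help — a sketch, where your_index stands in for the real index name:

```spl
| dbinspect index=your_index
| eval age_days = round((now() - startEpoch) / 86400, 1)
| stats count min(age_days) max(age_days) by state
```

This groups the buckets by state (hot, warm, cold) with their age range in days, which gives a starting point for deciding where the warm-to-cold transition should sit.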
Hi @wkk, you could try something like this:
index=your_index
| stats values(SUBMITTED_FROM) AS SUBMITTED_FROM values(STAGE) AS STAGE BY SESSION_ID
| mvexpand SUBMITTED_FROM
| mvexpand STAGE
| search SUBMITTED_FROM=startPage STAGE=submit
| stats count BY SESSION_ID
Ciao. Giuseppe
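As a side note, the search command matches against any value of a multivalue field, so the same sessions can likely be found without the mvexpand steps, and a final stats gives the session count directly — a sketch, assuming the same index and field names:

```spl
index=your_index
| stats values(SUBMITTED_FROM) AS SUBMITTED_FROM values(STAGE) AS STAGE BY SESSION_ID
| search SUBMITTED_FROM=startPage STAGE=submit
| stats count AS matching_sessions
```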
Hi, just to help anyone else. This builds on gmorris_splunk's answer. Version: 8.2.6. The below only shows the Date Range option. Note the removal of the commas and the use of empty curly brackets. One thing I could not get to work was displaying only the 'Between' option.
<panel>
<html>
<style>
body, .dashboard-body, .footer, .dashboard-panel, .nav { background: #F8FCF7; }
div[data-test^='time-range-dialog'] { background-color: #EDF8EB; min-width: 300px !important; width: 400px !important; }
div[data-test^='body'] { background-color: #D1ECCC; }
div[data-test-panel-id^='date'] { }
div[data-test-panel-id^='presets'] { display: none !important; }
div[data-test-panel-id^='dateTime'] { display: none !important; }
div[data-test-panel-id^='advanced'] { display: none !important; }
div[data-test-panel-id^='realTime'] { display: none !important; }
div[data-test-panel-id^='relative'] { display: none !important; }
</style>
</html>
</panel>
Hi! I have the following table:
SESSION_ID | SUBMITTED_FROM | STAGE
1          |                | submit
1          | startPage      | someStage1
2          |                | submit
2          | page1          | someStage1
2          | page2          | someStage2
How could I count the number of SESSION_IDs that have SUBMITTED_FROM=startPage and STAGE=submit? So looking at the above table the outcome of that logic should be 2 SESSION_IDs.
We had a similar finding from Splunk with high I/O wait time on Search Heads. I have used the following search to monitor it:
index=_introspection sourcetype=splunk_resource_usage component=IOStats
| eval avg_wait_ms = 'data.avg_total_ms'
| search data.mount_point="/apps/splunk"
| eval sla=10
| timechart limit=30 minspan=60s partial=f avg(data.avg_total_ms) as avg_wait_ms max(sla) AS sla by host
Use a trellis format (split by host) timechart to display it. The sla=10 field is there to show the 10 ms limit that Splunk recommends. I haven't been able to work out why we have high I/O on the Search Heads, though; the indexer cluster seems to perform OK. The Search Head Captain has notably higher I/O wait compared to the others. There have also been issues with the KV Store, so I'm wondering if that is related. Note: I/O wait time is not a configuration that can be set; it is the result of the operations being carried out on the disk.
Hi, The filename is called "lookup_edit" and you can navigate to it using the UI: Settings - User Interface - Views.
It is not clear what your expected result would look like - can you please explain further?
Hi All, I have this query that runs:
| tstats latest(_time) as LatestEvent where index=* by index, host
| eval LatestLog=strftime(LatestEvent,"%a %m/%d/%Y %H:%M:%S")
| eval duration = now() - LatestEvent
| eval timediff = tostring(duration, "duration")
| lookup HostTreshold host
| where duration > threshold
| rename host as "src_host", index as "idx"
| fields - LatestEvent
| search NOT (index="cim_modactions" OR index="risk" OR index="audit_summary" OR index="threat_activity" OR index="endpoint_summary" OR index="summary" OR index="main" OR index="notable" OR index="notable_summary" OR index="mandiant")
The result is below. Now how do I add index=waf_imperva? Thanks. Regards, Roger
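One thing to keep in mind: | tstats ... where index=* can only return rows for indexes that actually have events in the selected time range, so an index that stopped receiving data disappears from the output instead of showing up with an old timestamp. A common workaround is to append the list of expected index/host pairs from a lookup and treat the ones with no events as stale. A sketch, assuming a hypothetical lookup expected_hosts.csv with index and host columns:

```spl
| tstats latest(_time) as LatestEvent where index=* by index, host
| append [| inputlookup expected_hosts.csv | eval LatestEvent=0]
| stats max(LatestEvent) as LatestEvent by index, host
| eval status=if(LatestEvent=0, "no events in search window", "ok")
```

Rows whose LatestEvent stays 0 are the expected sources (such as waf_imperva) that produced nothing in the window.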
What would be your expected output?
Change your fieldForLabel and fieldForValue attributes:
<fieldForLabel>st_time</fieldForLabel>
<fieldForValue>st_time</fieldForValue>
Thanks for the reply... I understand the use of 2 separate volumes. I was asking if anyone could see a situation where the cold [spindle] volume could become full whilst the hot/warm [SSD] volume still had capacity, if both were sized the same: 6 months on SSD, 6 months on spindle...
the actual config has not been decided yet.   I'm trying to find the best one to spread the buckets across the indexers.
Thanks much @ITWhisperer It really worked
thank you @ITWhisperer  Above solution worked
Hi @law175, have you received any message from Splunk? The behavior you described is the one for a violation: this occurs if you index more logs than your daily quota more than 2 times in 30 solar days on a Trial License, or more than 45 times in 60 solar days on a Term License. Check in the [Settings > License] page. If you're not in violation there could be another explanation: are you using the index in your main search? In other words, try to add index=your_index or index=* at the beginning of your search, because your index may not be in the default search path. Third possibility: do you have the rights to read data from that index? Ciao. Giuseppe
Hi @gjhaaland, after a timechart command the only columns you have are the count and the values of the src_sg_info field. Are user1, user2 etc. values of the src_sg_info field? If they are values of src_sg_info you have to change the values before the timechart using eval, not rename (rename changes the name of a field):
index=asa host=1.2.3.4 src_sg_info=*
| eval src_sg_info=case(src_sg_info="user1","David E",src_sg_info="user2","Mary E",src_sg_info="user3","Lucy E")
| timechart span=10m dc(src_sg_info) by src_sg_info
Ciao. Giuseppe
Hi, the code is the following:
index=asa host=1.2.3.4 src_sg_info=*
| timechart span=10m dc(src_sg_info) by src_sg_info
| rename user1 as "David E"
| rename user2 as "Mary E"
| rename user3 as "Lucy E"
If the number of users is 0, then we know there is no VPN user at all. The plan is to print that out together with the active VPN users in the timechart, if possible: for each time span, show either the active user names (user2, user3, ...) or a "No VPN user" label when nobody is connected.
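One possible way to get a "No VPN user" indication out of the search itself is to compute the distinct count first and derive a label column from it — a sketch building on the search above (field names unchanged; the label text and column name are assumptions):

```spl
index=asa host=1.2.3.4 src_sg_info=*
| timechart span=10m dc(src_sg_info) as active_users
| eval status=if(active_users=0, "No VPN user", "VPN active")
```

This gives one row per 10-minute span with the user count and a status label, which can be tabled or charted alongside the per-user breakdown.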
Hi @m_pham, yes I tried, but this information always remains the same. In addition, I found that sometimes the values for the Search Heads are displayed as "N.A.", and I noticed that sometimes (not always), when forcing Apply Configuration, some of the values in the Summary dashboard are displayed, but not always and not all of them. Splunk Support told me that the second issue is a known bug that will be solved in 9.1.2. Ciao. Giuseppe