All Posts

Without seeing your events it is difficult to determine what you need to do with the tstats to get the data you want.
I need this query to use the top command, but it looks like it needs to be rewritten in some way first.
This is exactly why I'm here. My tstats query isn't complete; I need this data to be shown in the results the same way it was in my usual (non-tstats) query.
Try removing lines one at a time until the error goes away, to find out where the breaking point is.
So your conversion to tstats is not complete, then? Using the data you get back from tstats, is there sufficient information for you to compile the results you want (or do you need a different version of the tstats search)?
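Not from the thread, but as a generic sketch of the usual pattern (index and field names here are placeholders): top itself expects raw events, but its output can be reproduced after tstats with sort, head, and a percentage calculation:

```
| tstats count where index=your_index by your_field
| eventstats sum(count) as total
| eval percent=round(count/total*100, 2)
| sort - count
| head 10
| fields - total
```

This yields the same count and percent columns that top would have produced over the raw events.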
I have two different datasets within the Updates data model. I created a few panels within a dashboard that I use to collect the installed updates and update errors. I want to combine both of these searches into one by combining the datasets, to correlate which machines are updating or encountering errors. Here are the two searches I have so far.

Installed Updates:

| datamodel Updates Updates search
| rename Updates.dvc as host
| rename Updates.status as "Update Status"
| rename Updates.vendor_product as Product
| rename Updates.signature as "Installed Update"
| eval isOutlier=if(lastTime <= relative_time(now(), "-60d@d"), 1, 0)
| `security_content_ctime(lastTime)`
| eval time = strftime(_time, "%m-%d-%y %H:%M:%S")
| search * host=$host$
| rename lastTime as "Last Update Time"
| table time host "Update Status" "Installed Update"
| `no_windows_updates_in_a_time_frame_filter`

Update Errors:

| datamodel Updates Update_Errors search
| eval time = strftime(_time, "%m-%d-%y %H:%M:%S")
| search * host=$host$
| table _time, host, _raw
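One possible shape for the combined search, not a finished answer: tag each dataset, append them, and aggregate by host. The Update_Errors field name (Update_Errors.dest) is an assumption and may need to be swapped for whatever host field that dataset actually carries:

```
| datamodel Updates Updates search
| eval dataset="installed"
| append
    [| datamodel Updates Update_Errors search
     | eval dataset="error"]
| eval host=coalesce('Updates.dvc', 'Update_Errors.dest')
| search host=$host$
| stats values(Updates.signature) as "Installed Updates",
        count(eval(dataset="error")) as "Error Count"
        by host
```

A host appearing with a non-zero "Error Count" alongside its installed updates is the correlation you described.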
Hi @tomjb94, could you share a sample of your logs? Anyway, if there's only one timestamp in your logs, you could try using only TIME_FORMAT without TIME_PREFIX. Ciao. Giuseppe
Thanks, Dural, for the response. We'll need to figure out the Dell Unity storage technical details and see how we can do the GDI.
Since both of these are data-source content issues, it's difficult to determine from the Splunk side. I would start with more research on the specific machine's side: what traffic is being generated and from what application, and who, if anyone, is logged in live.
I have this use case and want to report on bytes by dest_hostname. After adjusting for current Palo Alto field names, the provided answer yields no results:

index=firewalls sourcetype=pan:traffic dest_zone=untrust dest_port=443
    [search index=firewalls sourcetype=pan:threat | fields dest_hostname]
| stats sum(bytes) BY dest_hostname
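Not from the original answer, but before blaming the outer search it may be worth confirming the subsearch actually returns dest_hostname values, since an empty field list makes the whole search return nothing:

```
index=firewalls sourcetype=pan:threat
| stats count by dest_hostname
```

If that comes back empty, the field may simply be named differently in the current pan:threat extractions; running `| fieldsummary` over a few of those events would show which fields are actually present.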
You really need to investigate your internal logs for bucket replication messages to get an idea of what is or is not happening. There are so many contributing factors to what could be occurring that it would be difficult to provide an answer at this point.
I am playing around with the splunk-rolling-upgrade app in our DEV environment. We don't use a kvstore there, and we don't use a kvstore on our indexers in PROD either, which is where I would like to use this once I sort out the process. However, the automated upgrade process appears to be failing because it is looking for a healthy kvstore. Is there a flag or something I can put into the rolling_upgrade.conf file so that it ignores the kvstore? Especially when it comes to our CM and indexers, where we have the kvstore disabled.
Checking the history in Answers and on the Dell/EMC websites, this has been an issue for a few years; no obvious solutions were ever provided.
Hello to everyone! My question looks very dumb, but I really can't understand how to resolve it. So, what we have, step by step:

1. A network device sends an event via UDP directly to an indexer
2. The indexer receives the message, according to a Wireshark capture
3. Then I try to find this event on a search head, and I see nothing
4. Somehow I generate another event on the network device
5. Then I expect to see two events during the search, but I see only the previous one

This behavior is a little bit random but easy to reproduce with network devices that send events infrequently. Additionally, I can easily detect the wrong behavior because of the significant difference between _time and _indextime of those events. A couple of words about the indexer settings: props.conf on the indexer looks like this, nothing special:

[cat_syslog]
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = true
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d{1,3}>\d+:\s+.*:\s+\d+:\s+

Overall, here is what I can assume:

1. According to my props.conf, the indexer expects to find the default line breaker ([\r\n]+) to apply the line-breaking rule and create a single event
2. But for some reason it fails to do so
3. From this moment, the indexer waits until the next event
4. And, I don't know why, but ([\r\n]+) appears in the next message

So, the question is: how do I NOT wait until the next event in this situation? I also understand that I can't change the line-breaking rule because of the very infrequent events. Also, there are no special characters at the end of the events, because they look like this:

<172>153702: 172.21.0.13: 153696: Sep 13 16:30:50.797 RTZ: %RADIUS-4-RADIUS_ALIVE: RADIUS server 172.28.20.80:1812,1813 is being marked alive.
<174>153700: 172.21.0.13: 153694: Sep 13 16:30:30.714 RTZ: %RADIUS-6-SERVERALIVE: Group AAA_RADIUS: Radius server 172.21.20.80:1812,1813 is responding again (previously dead).
<173>153695: 172.21.0.13: 153689: Sep 13 16:25:05.626 RTZ: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/9, changed state to up
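A possible direction, not from the thread: since every message begins with a syslog priority header like <172>, one commonly suggested workaround is to break before that header instead of relying on a trailing newline, and to pin the timestamp format explicitly. This is a sketch only, and the zero-or-more newline match in the breaker may need tuning against your own feed:

```
[cat_syslog]
SHOULD_LINEMERGE = false
# Break before each syslog priority tag, even when no newline arrives
LINE_BREAKER = ([\r\n]*)(?=<\d{1,3}>\d+:)
TIME_PREFIX = ^<\d{1,3}>\d+:\s+\S+:\s+\d+:\s+
# Matches e.g. "Sep 13 16:30:50.797"
TIME_FORMAT = %b %d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Whether this helps depends on how the UDP input buffers data, so verify with a low-rate test device before rolling it out.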
Hi - I have a quick props question. I need to write props for a particular sourcetype, and the messages always start with one of the following before the timestamp:

ukdc2-pc-sfn122.test.local -
OR
ukdc2-pc-sfn121.test.local -

When writing the TIME_PREFIX, can a regex be written to account for this? Is it just a basic one? If so, can someone provide it? Thanks
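A minimal sketch, assuming the prefix is always one of those two hostnames followed by space, dash, space (the stanza name is a placeholder):

```
[your_sourcetype]
# Matches either ukdc2-pc-sfn121 or ukdc2-pc-sfn122
TIME_PREFIX = ^ukdc2-pc-sfn12[12]\.test\.local\s+-\s+
# Or, if other hosts in the same domain may appear later:
# TIME_PREFIX = ^\S+\.test\.local\s+-\s+
```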
Checking other Answers, it doesn't appear that "extracted_eventtype" is specific to DUO logs or app extractions. That leads me to believe it is automagically generated at search time via Splunk's default behavior.
Thank you! Unfortunately,

| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*(?P<STEP>[^,]+),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*[^,]+,\s*PKEY:\s*(?P<PKEY>.*?),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*[^,]+,\s*PKEY:\s*.*?,\s*STATE:\s*(?P<STATE>[^,]+),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*[^,]+,\s*PKEY:\s*.*?,\s*STATE:\s*[^,]+,\s*MSG0:\s*(?P<MSG0>.*?),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*[^,]+,\s*PKEY:\s*.*?,\s*STATE:\s*[^,]+,\s*MSG0:\s*.*?,\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*[^,]+,\s*PKEY:\s*.*?,\s*STATE:\s*[^,]+,\s*MSG0:\s*.*?,\s*EXCID:\s*[a-zA-Z_]+,\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"

did not help much.
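Not from the thread, but assuming the fields always appear in this fixed order between #HLS# and #HLE#, the seven passes could be collapsed into a single capture (a sketch; adjust field order and separators to match your actual events):

```
| rex "#HLS#\s*IID:\s*(?<IID>[^,]+),\s*STEP:\s*(?<STEP>[^,]+),\s*PKEY:\s*(?<PKEY>.*?),\s*STATE:\s*(?<STATE>[^,]+),\s*MSG0:\s*(?<MSG0>.*?),\s*EXCID:\s*(?<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?<PROPS>[^#]+)\s*#HLE#"
```

A single pass also makes it easier to spot which field boundary fails: if one capture is wrong, the whole match fails rather than silently extracting some fields and not others.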
Hi @BRFZ, every dashboard in Splunk is a search; you can open the panel in Search (using the "Open in Search" button) and see how it's written so you can modify it. In a few words, this:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by h fixedrange=false
| join type=outer _time
    [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | dedup _time stack
    | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

Ciao. Giuseppe
Copy and paste the exact command you are using into a formatted or code window. The web browser accepts some special characters that something like wget cannot interpret.
| tstats count where index=* index!=_* by host

This will only give you a count of events; it gives no insight into storage size requirements. You can add sourcetype and/or source after the host field if you need more detailed information. It all depends upon what your specific goals are.
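If sizing is the goal, here is a rough sketch using the license usage log instead; the b (bytes) and h (host) field names come from license_usage.log, and the time range is up to you:

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by h
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB
```

Note this reflects licensed ingest volume per host, not on-disk index size after compression, so treat it as an estimate.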