All Posts

@dc17 - I'm not sure what logs you are trying to find in the Event Viewer. Are there any specific Application logs you are trying to find?
So as I understand it,

| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name ]

is going to append the hostnames from the lookup to the results of the first query. Finally,

| stats count by customer_name hostnames

is going to count a value as 1 if it's present only in the lookup, and as 2 if it's present in both the first part of the search and the lookup? Is that correct?

However, in the result there are no values with count 2, which is unlikely as there are a few hosts that are present in both the events and the lookup. Here we try to fetch the events that contain hostnames (configured to receive application logs) and then compare them with the list of servers present in the lookup (count=1 meaning found in the lookup only). It seems the query still isn't performing the required search, as there are no values with count 2. I observe that count is returned as 1 for the hosts received from the events in

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name

There is almost no trace of values from the lookup, so I'm not sure they are even being compared, and this is what the issue was earlier. How can these two lists be compared and listed?
Hi Hardik, You can use this ADQL as a simple query: SELECT `wait-state-id`, (`wait-state-id`) FROM dbmon_wait_time. This query gathers data based on "wait state id", but you can change it to use "wait state name" instead. For example, wait state id 59 = Using CPU. Thanks, Cansel
Try this:

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=tonumber(strftime(now(), "%H%M"))
| eval is_maintenance_window=if(current_time >= 2100 OR current_time < 400, 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1

Note the OR in the maintenance-window test: the 21:00-04:00 window crosses midnight, so no single time can satisfy both bounds at once.
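The midnight-crossing window is a common trap, so here is the same check sketched in Python with hypothetical times (2130 means 21:30), just to show why the two bounds combine with OR rather than AND:

```python
def in_maintenance_window(hhmm: int) -> bool:
    """Return True if hhmm falls in the overnight window 21:00-04:00.
    Because the window crosses midnight, the bounds combine with OR:
    no time is simultaneously >= 2100 and < 0400."""
    return hhmm >= 2100 or hhmm < 400

print(in_maintenance_window(2130))  # True  (inside the window)
print(in_maintenance_window(300))   # True  (early morning, still inside)
print(in_maintenance_window(1200))  # False (midday, outside)
```

An AND between the same two comparisons would match nothing, which silently disables the window check.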
Your sample data is inconsistently formatted, e.g. sometimes there is a space before/after the = or the comma. Please confirm the exact pattern your data will take so we don't waste effort on invalid data.
Adding to this in case anyone else is having this issue. It seems that when Python is executed, something attempts to write to /tmp, which ends up with a memory error when /tmp is mounted with noexec. Our solution was to add TMPDIR=<writable path> to splunk-launch.conf.
Since you appear to have a one-to-one relationship between label and index, just include both in the by clause:

<query>index IN (aws_stack02_p, aws_stack01_p, aws_stack01_n) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by label, index</query>
In order to do calculations or meaningful comparisons with dates and times, they need to be converted (parsed) to unix-style timestamps.

| eval datetime_unix=strptime(DATETIME, "%F %T")
| eventstats max(datetime_unix) as last_datetime
| where datetime_unix == last_datetime
| stats count by market_code
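For readers less familiar with SPL, the same parse-then-filter pattern can be sketched in Python. The rows are made up, and SPL's "%F %T" is shorthand for "%Y-%m-%d %H:%M:%S":

```python
from collections import Counter
from datetime import datetime

# Hypothetical rows with DATETIME strings as in the question.
rows = [
    {"DATETIME": "2024-05-01 10:00:00", "market_code": "US"},
    {"DATETIME": "2024-05-02 09:30:00", "market_code": "US"},
    {"DATETIME": "2024-05-02 09:30:00", "market_code": "EU"},
]

# eval datetime_unix=strptime(DATETIME, "%F %T")
for r in rows:
    r["datetime_unix"] = datetime.strptime(r["DATETIME"], "%Y-%m-%d %H:%M:%S").timestamp()

last = max(r["datetime_unix"] for r in rows)               # eventstats max(datetime_unix)
latest = [r for r in rows if r["datetime_unix"] == last]   # where datetime_unix == last_datetime
counts = Counter(r["market_code"] for r in latest)         # stats count by market_code
print(dict(counts))  # {'US': 1, 'EU': 1}
```

String comparison of raw "%F %T" timestamps would happen to sort correctly, but parsing to epoch seconds is what makes arithmetic (differences, ranges) possible.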
Firstly, join is not a very friendly command; it has its quirks. In this case I'd rather use either append or inputlookup append=t.

Another thing: if you do mvexpand on multiple multivalued fields you'll get a product of both sets of values, and it can escalate quickly for bigger data sets. See the run-anywhere example:

| makeresults
| eval a=split("1,2,3,4,5",",")
| eval b=split("1,2,3,4,5",",")
| mvexpand a
| mvexpand b

Of course you can do some fancy set arithmetic on those multivalued fields, but it's usually easier done another way: count and filter.

This part is OK, it lists which customers use which hosts:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name

You're gonna get a list of hosts per customer. What is important here is that each host will be listed only once per customer. So we expand our list with the servers defined for each customer:

| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name ]

So now for each customer you have a single value of the hostnames field per host. Nothing easier now than counting what we have:

| stats count by customer_name hostnames

So for each pair of customer_name and hostnames values you will have a count indicating whether the pair was present only in the lookup (value of 1) or in both the lookup and the indexed events (value of 2). Now you can easily manipulate the data - filter, make a table, whatever.

All this assumes that you don't have any hosts in the event data which are not in the lookup. If you can have such a situation, the search gets a bit more complex: you need to add an additional field with two different numerical values, depending on whether the data came from the events or from the lookup, and do a sum() instead of count in your final stats so you can see where the data came from.
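The count trick above can be illustrated outside Splunk as well. A minimal Python sketch with made-up hosts: each (customer, host) pair is counted once per source it appears in, so 2 means "in both", 1 means "lookup only":

```python
from collections import Counter

# Hypothetical (customer, host) pairs from indexed events and from the lookup.
events_hosts = {("customer1", "server1"),
                ("customer2", "server2"), ("customer2", "server3")}
lookup_hosts = {("customer1", "server1"), ("customer1", "server100"),
                ("customer2", "server2"), ("customer2", "server3"),
                ("customer2", "server101")}

# append + stats count by customer_name hostnames:
# concatenate both sources, then count each pair.
counts = Counter(list(events_hosts) + list(lookup_hosts))

for (customer, host), n in sorted(counts.items()):
    status = "configured" if n == 2 else "not configured"
    print(customer, host, status)
```

The key precondition, as noted above, is that stats values() deduplicates hosts within each source, so a pair can contribute at most 1 per source.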
You are not giving much away! You will need to do some digging! Which events are not being picked up? When do they occur and when do they get indexed? How do these times relate to your alert searches? How important are these missed alerts? How much effort do you want to spend finding these events?
index=firewall (sourcetype=collector OR sourcetype=metadata) (enforcement_mode=block OR event_type="error") |table event_type, hostname, ip
You haven't told us what you want the search to do, so I'm only guessing. Probably your hosts log events which have either an enforcement_mode=block field or an event_type=error field, but no single event has both of these fields set. So your "combined" search will not find them, because both conditions aren't fulfilled in a single event. That's why you need to correlate multiple events by using either transaction or stats (the stats approach is preferred due to the transaction command's limitations).
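The correlation idea can be sketched in Python with hypothetical events (field names as in the question): collect both conditions per host across separate events, then keep hosts where both were seen:

```python
from collections import defaultdict

# Hypothetical events: each carries one condition, never both.
events = [
    {"hostname": "fw1", "enforcement_mode": "block"},
    {"hostname": "fw1", "event_type": "error"},
    {"hostname": "fw2", "event_type": "error"},
]

# Roughly: search (enforcement_mode=block OR event_type=error)
# | stats values(enforcement_mode) values(event_type) by hostname
by_host = defaultdict(lambda: {"modes": set(), "types": set()})
for e in events:
    if e.get("enforcement_mode") == "block":
        by_host[e["hostname"]]["modes"].add("block")
    if e.get("event_type") == "error":
        by_host[e["hostname"]]["types"].add("error")

# Keep hosts where both conditions appeared, even in different events.
both = [h for h, v in by_host.items() if v["modes"] and v["types"]]
print(both)  # ['fw1']
```

An AND of both conditions on a single event, by contrast, would return nothing here, which matches the "zero events" symptom described in the thread.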
Hi @sphiwee, as I said, I don't know Powershell very well; give it a try. Ciao. Giuseppe
I need to report hosts that are configured to receive app.log details and also report the ones that are missing. For this, I use the query:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name

This retrieves the hostnames for each customer_name from the sourcetype. I get a result like:

customer_name   hostnames
customer1       server1
customer2       server2, server3

Then, I join the result on the customer_name field with the second part of the query, which retrieves the hostnames for each customer_name from the server_info.csv lookup table:

[| inputlookup server_info.csv
 | rename customer as customer_name
 | stats values(host) as hostnames_lookup by customer_name]

Here I get a result like:

customer_name   hostnames           hostnames_lookup
customer1       server1             server1, server100
customer2       server2, server3    server2, server3, server101

Later, I expand both multivalue fields and evaluate them against each other to classify each host as configured or not configured. The evaluation looks like this:

| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

My final query looks like this:

(index=application sourcetype="application:server:log")
| stats values(host) as hostnames by customer_name
| join customer_name
    [| inputlookup server_info.csv
     | rename customer as customer_name
     | stats values(host) as hostnames_lookup by customer_name]
| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

However, when the evaluation completes, the results are not as expected: the matching logic doesn't work and the resultant output is
incorrect. There are no values in the not_configured column, and the configured column only returns the values in hostnames. I'd expect the configured field to show all the servers configured to receive app.log, and not_configured to show hostnames that are present in the lookup but are still not configured to receive logs.

Expected Output:

customer_name   hostnames           hostnames_lookup               configured          not_configured
customer1       server1             server1, server100             server1             server100
customer2       server2, server3    server2, server3, server101    server2, server3    server101

Current Output:

customer_name   hostnames           hostnames_lookup               configured          not_configured
customer1       server1             server1, server100             server1
customer2       server2, server3    server2, server3, server101    server2, server3

Essentially, customer1 should display server1 as configured and server100 as not_configured, and likewise for customer2, as in the expected output table. That would mean that server100 and server101 are part of the lookup but are not configured to receive app.log.

How can I evaluate this differently so that the comparison works as expected? Is it possible to compare the values in this fashion? Is there anything wrong with the current comparison logic? Should I not use mvexpand on the extracted fields so that they are compared as expected?
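For comparison, the intended result can be expressed as plain set arithmetic per customer, which avoids the pairwise-row problem of mvexpand entirely. A Python sketch using the sample data from the tables above:

```python
# Hypothetical data mirroring the tables in the question.
events = {"customer1": {"server1"},
          "customer2": {"server2", "server3"}}
lookup = {"customer1": {"server1", "server100"},
          "customer2": {"server2", "server3", "server101"}}

results = {}
for customer, expected in lookup.items():
    seen = events.get(customer, set())
    results[customer] = {
        "configured": sorted(expected & seen),      # in the lookup AND seen in events
        "not_configured": sorted(expected - seen),  # in the lookup but never seen
    }

print(results["customer1"])  # {'configured': ['server1'], 'not_configured': ['server100']}
```

This also hints at why the mvexpand approach misbehaves: after expanding both fields you compare every (hostnames, hostnames_lookup) pair row by row, not the two lists as wholes.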
Hi @VatsalJagani, Thanks for the reply. Could you help me find the full path of the .evtx file from the Event Viewer? I could not find any reference in my Custom Views in the Event Viewer to a full path where the logs are stored. If I can locate this full path, I can perform some tests on the solution you kindly proposed. Thanks
We have an issue with long JSON log events that exceed the console width limit: they get split into 2 separate events, neither of which is valid JSON. How do we handle this correctly? Is it possible to restore the broken messages on the Splunk side, or do we need to reach out about the logger's width limitation and chunk messages in a proper way? How should large JSON events be handled?
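If the fragments arrive in order and fixing the producer isn't an option, one pre-indexing workaround is to accumulate lines until they parse as valid JSON. This is only a hedged sketch with made-up lines (within Splunk itself you would more likely tune LINE_BREAKER/TRUNCATE in props.conf, or fix the logger):

```python
import json

# Hypothetical input: the second logical event was wrapped across two lines.
lines = [
    '{"level": "info", "msg": "short event"}',
    '{"level": "error", "msg": "a very long eve',   # first half of a wrapped event
    'nt that was wrapped"}',                        # second half
]

events, buffer = [], ""
for line in lines:
    buffer += line
    try:
        events.append(json.loads(buffer))  # complete JSON: emit and reset
        buffer = ""
    except json.JSONDecodeError:
        continue                           # partial JSON: keep accumulating

print(len(events))  # 2
```

The approach assumes fragments are contiguous and ordered; with interleaved sources you would need a per-source buffer instead of a single one.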
@dc17 - Did the solution work for you? If so, kindly consider accepting the answer for future Splunk users.
Hello! I would like to run a search which would display all information regarding entities and services. For example, for Entities, where could I find the information stored for: Entity Description, Entity Information Field, Entity Title? For Services, where could I find the information stored for: Service Description, Service Title, Service Tags? What type of search query could I run to find this information? Thanks,
Hi @manikanta461, We've migrated one of our high-volume indexes to SmartStore and are facing exactly the same issues you've described in your post. Could you please tell me how you resolved this issue? And if you performed a rollback, what steps should be taken to minimize data loss/impact?
Something in my solution is not right. It works for each condition on its own, but combined it produces zero events.

--------- Events
index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block
| table event_type, hostname, ip

------------- Events
index=firewall (sourcetype=collector OR sourcetype=metadata) event_type="error"
| table event_type, hostname, ip

------------ No events
index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block event_type="error"
| table event_type, hostname, ip
--------------