All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Since you appear to have a one-to-one relationship between label and index, just include both in the by clause: <query>index IN ({aws_stack02_p,aws_stack01_p,aws_stack01_n}) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by label, index</query>
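A run-anywhere sketch of the same idea (the makeresults events and the index-to-label mapping below are made up for illustration); because each index maps to exactly one label, grouping by both fields does not split the counts any further:

| makeresults count=3
| streamstats count as n
| eval index=case(n==1, "aws_stack02_p", n==2, "aws_stack01_p", n==3, "aws_stack01_n")
| eval label=case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3")
| stats count by label, index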
In order to do calculations or meaningful comparisons with dates and times, they need to be converted (parsed) to unix-style timestamps. | eval datetime_unix=strptime(DATETIME, "%F %T") | eventstats max(datetime_unix) as last_datetime | where datetime_unix == last_datetime | stats count by MARKET_CODE
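Applied to the data in this thread, a hedged variation of the same approach (index and sourcetype are placeholders, the field names come from the sample event, and summing TRADE_COUNT rather than counting events is an assumption about the desired metric):

index=your_index sourcetype=your_sourcetype
| eval datetime_unix=strptime(DATETIME, "%F %T")
| eventstats max(datetime_unix) as last_datetime
| where datetime_unix == last_datetime
| stats sum(TRADE_COUNT) as TRADE_COUNT by MARKET_CODE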
Firstly, join is not a very friendly command; it has its quirks. In this case I'd rather use either append or inputlookup append=t.

Another thing - if you do mvexpand on multiple multivalued fields you'll get a product of both sets of values. It can escalate quickly for bigger data sets. See the run-anywhere example:

| makeresults
| eval a=split("1,2,3,4,5",",")
| eval b=split("1,2,3,4,5",",")
| mvexpand a
| mvexpand b

Of course you can do some fancy set arithmetic on those multivalued fields, but it's usually easier done another way - count and filter.

This part is OK, it lists which customers use which hosts:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name

You're going to get a list of hosts per customer. What is important here is that each host will be listed only once per customer. So we expand our list with the servers defined for each customer:

| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name
]

So now you have a single value of the hostnames field per customer per host. Nothing easier now than counting what we have:

| stats count by customer_name hostnames

So for each pair of customer_name and hostnames values you will have a count value indicating whether it was only present in the lookup (value of 1) or in both the lookup and the indexed events (value of 2). Now you can easily manipulate the data - filter, make a table, whatever.

All this assumes that you don't have any hosts in the event data which are not in the lookup. If you can have such a situation, the search gets a bit more complex: you need to add an additional field with two different numerical values, depending on whether the data came from the events or from the lookup, and do a sum() instead of count in your final stats so you can see where the data came from (see the sketch below).
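A minimal sketch of that last variant, assuming the lookup columns are customer and host as described in the question; the numeric src values (2 for indexed events, 1 for the lookup) and the status labels are arbitrary choices for illustration:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name
| eval src=2
| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name
    | eval src=1 ]
| stats sum(src) as src by customer_name hostnames
| eval status=case(src==3, "configured", src==1, "in lookup only (not configured)", src==2, "in events only (missing from lookup)")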
You are not giving much away! You will need to do some digging! Which events are not being picked up? When do they occur and when do they get indexed? How do these times relate to your alert searches? How important are these missed alerts? How much effort do you want to spend finding these events?
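If part of the answer turns out to be indexing delay (events arriving in Splunk after the alert's time window has already been searched), a hedged starting point is to compare event time with index time; index=your_index is a placeholder for the index your alerts search:

index=your_index earliest=-24h
| eval lag_seconds=_indextime - _time
| stats max(lag_seconds) as max_lag_seconds perc95(lag_seconds) as p95_lag_seconds by sourcetype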
index=firewall (sourcetype=collector OR sourcetype=metadata) (enforcement_mode=block OR event_type="error") | table event_type, hostname, ip
You haven't told us what you want the search to do, so I'm only guessing. Probably your hosts log events which have either an enforcement_mode=block field or an event_type=error field, but no single event has both of these fields set. So your "combined" search will not find them, because both conditions aren't fulfilled in a single event. That's why you need to correlate multiple events by using either transaction or stats (the stats approach is preferred due to the transaction command's limitations).
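A sketch of the stats approach for the searches in this thread, assuming hostname is the field the two kinds of events have in common; events are grouped per host and then filtered to hosts that logged both a block and an error:

index=firewall (sourcetype=collector OR sourcetype=metadata) (enforcement_mode=block OR event_type="error")
| stats count(eval(enforcement_mode=="block")) as blocks count(eval(event_type=="error")) as errors values(ip) as ip by hostname
| where blocks > 0 AND errors > 0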
Hi @sphiwee, as I said, I don't know PowerShell very well - give it a try. Ciao. Giuseppe
I need to report hosts that are configured to receive app.log details and also report the ones that are missing. For this, I use the query "index=application sourcetype="application:server:log" | stats values(host) as hostnames by customer_name". This retrieves the hostnames for each customer_name from the sourcetype.

I get a result like:

customer_name | hostnames
customer1 | server1
customer2 | server2, server3

Then I join the result by the customer_name field with the second part of the query "[| inputlookup server_info.csv | rename customer as customer_name | stats values(host) as hostnames_lookup by customer_name]", which retrieves the hostnames for each customer_name from the server_info.csv lookup table.

Here I get a result like:

customer_name | hostnames_lookup
customer1 | server1, server100
customer2 | server2, server3, server101

Later, I expand both multivalue fields and evaluate both fields to classify each host as configured or not configured. The evaluation looks like this:

| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

My final query looks like this:

(index=application sourcetype="application:server:log")
| stats values(host) as hostnames by customer_name
| join customer_name [| inputlookup server_info.csv | rename customer as customer_name | stats values(host) as hostnames_lookup by customer_name]
| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

However, after the evaluation the results are not as expected: the matching logic doesn't work and the output is incorrect. No values are evaluated in the not_configured column, and the configured column only returns the values in hostnames. I'd expect the configured field to show all the servers configured to receive app.log, and not_configured to hold the hostnames that are present in the lookup but are still not configured to receive logs.

Expected output:

customer_name | hostnames | hostnames_lookup | configured | not_configured
customer1 | server1 | server1, server100 | server1 | server100
customer2 | server2, server3 | server2, server3, server101 | server2, server3 | server101

Current output:

customer_name | hostnames | hostnames_lookup | configured | not_configured
customer1 | server1 | server1, server100 | server1 | (empty)
customer2 | server2, server3 | server2, server3, server101 | server2, server3 | (empty)

Essentially, customer1 should display server1 as configured and server100 as not_configured, and likewise for customer2 as shown in the expected output table. That would mean server100 and server101 are part of the lookup but are not configured to receive app.log.

How can I evaluate this differently so that the comparison works as expected? Is it possible to compare the values in this fashion? Is there anything wrong with the current comparison logic? Should I not use mvexpand on the extracted fields so that they are compared as expected?
Hi @VatsalJagani, Thanks for the reply. Could you help me find the full path of the .evtx file from Event Viewer? I could not find any reference in my Custom Views in Event Viewer to the full path where the logs are stored. If I can find this full path, I can perform some tests on the solution you kindly proposed. Thanks
We have an issue with long JSON log events that exceed the console width limit - they are split into 2 separate events, and neither of them is valid JSON on its own. How do we handle this correctly? Is it possible to restore the broken messages on the Splunk side, or do we need to change the logger to account for the width limitation and chunk messages properly? How should large JSON events be handled?
@dc17 - Did the solution work for you? If so, kindly consider accepting the answer for future Splunk users.
Hello! I would like to run a search which would display all information regarding entities and services. For example, for entities, where could I find the information stored for Entity Description, Entity Information Field, and Entity Title? For services, where could I find the information stored for Service Description, Service Title, and Service Tags? What type of search query could I run to find this information? Thanks,
Hi @manikanta461, We've migrated one of our high-volume indexes to SmartStore and are facing the exact same issues as you've described in your post. Could you please tell me how you resolved this issue? And, if you performed a roll-back, what steps should be taken to minimize data loss/impact?
Something in my solution is not right. It works for only one condition (one or the other), but combined it produces zero events.

Events:
index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block
| table event_type, hostname, ip

Events:
index=firewall (sourcetype=collector OR sourcetype=metadata) event_type="error"
| table event_type, hostname, ip

No events:
index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block event_type="error"
| table event_type, hostname, ip
It is also from both a manual search and a dashboard.
It is showing when I do | loadjob savedsearch="username:search:data_need". It was scheduled... I changed it thinking that would fix it... but it did not.
I didn't disable workload management because I couldn't enable it - this feature is not supported on Windows installations. These messages are generated by members of the IDXC, the manager node, and only one SHC member (the cluster captain).
Thanks in advance. I have four inputs - Time, Environment, Application Name and Interface Name - and two panels, one for Finance and one for Bank. Both panels have different application names and interface names, so I tried to use depends and rejects on the inputs. If I switch from one panel to the other, the inputs (the dropdowns and the text box) stay the same, but their values need to change according to the selected panel.

<row>
  <panel id="panel_layout">
    <input id="input_link_split_by" type="link" token="tokSplit" searchWhenChanged="true">
      <label></label>
      <choice value="Finance">OVERVIEW</choice>
      <choice value="BankIntegrations">BANKS</choice>
      <default>OVERVIEW</default>
      <initialValue>OVERVIEW</initialValue>
      <change>
        <condition label="Finance">
          <set token="Finance">true</set>
          <unset token="BankIntegrations"></unset>
        </condition>
        <condition label="BankIntegrations">
          <set token="BankIntegrations">true</set>
          <unset token="Finance"></unset>
        </condition>
<row>
  <panel>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time Interval</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="*">ALL</choice>
      <choice value="DEV">DEV</choice>
      <choice value="TEST">TEST</choice>
      <choice value="PRD">PRD</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>ApplicationName</label>
      <choice value="*">ALL</choice>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>"p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>ApplicationName</label>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
  </panel>
</row>
Hi, I have the raw data/event shown below. Splunk gets the raw data every 2 hours, only 4 times a day - it runs at 11:36, 13:36, 15:36 and 17:36, and that is when Splunk receives the raw data. Per day I am getting ~2.5K events. The field DATETIME tells what time the job ran.

2024-04-15 21:36:58.960, DATETIME="2024-04-15 17:36:02", REGION="India", APPLICATION="webApp", CLIENT_CODE="ind", MARKET_CODE="SEBI", TRADE_COUNT="1"

What I am looking for: when I run the dashboard, I want to monitor the trade count by MARKET_CODE over the latest DATETIME. For instance, if I run the dashboard at 14:00, the field DATETIME might have 11:36 (~600 events) and 13:36 (~600 events). I want to see only the 13:36 600 events, and the metric would be TRADE_COUNT by MARKET_CODE. Thanks, Selvam.
Does this solution work remotely?