All Posts

Kudos for digging - glad you found a solution - could you quantify the performance hit?
So what did you try, what was the result, and how do you want your timechart to look in that context?
As @ITWhisperer says, if this drilldown is taking you to a search in a new Splunk window, then you have no control over the format.
Please explain what you mean by "spath does not work". It works for me in this run-anywhere example (escape characters added to satisfy the SPL parser). What is your query? What results do you expect and what do you get?

| makeresults
| eval data="{\"time\":\"time_here\",\"kubernetes\":{\"host\":\"host_name_here\",\"pod_name\":\"pod_name_here\",\"namespace_name\":\"namespace_name_here\",\"labels\":{\"app\":\"app_label\"}},\"log\":{\"jobId\":\"job_id_here\",\"dc\":\"dc_here\",\"stdout\":\"{ \\\"Componente\\\" : \\\"componente_here\\\", \\\"channel\\\" : \\\"channel_here\\\", \\\"timestamp\\\" : \\\"timestamp_here\\\", \\\"Code\\\" : \\\"code_here\\\", \\\"logId\\\" : \\\"logid_here\\\", \\\"service\\\" : \\\"service_here\\\", \\\"responseMessage\\\" : \\\"responseMessage_here\\\", \\\"flow\\\" : \\\"flow_here\\\", \\\"log\\\" : \\\"log_here\\\"}\",\"level\":\"info\",\"host\":\"host_worker_here\",\"flow\":\"flow_here\",\"projectName\":\"project_name_here\",\"caller\":\"caller_here\"},\"cluster_id\":\"cluster_id_here\"}"
| spath input=data
| transpose

And the results:
Hello! As the subject of the question says, I'm trying to create SPL queries for several visualizations, but it has become very tedious since spath does not work with the outputted events, as they come in a string format, making it very hard to perform more complex operations.

The event contents are in valid JSON format (checked using jsonformatter). Here's the event output:

{"time":"time_here","kubernetes":{"host":"host_name_here","pod_name":"pod_name_here","namespace_name":"namespace_name_here","labels":{"app":"app_label"}},"log":{"jobId":"job_id_here","dc":"dc_here","stdout":"{ \"Componente\" :  \"componente_here\", \"channel\" :  \"channel_here\", \"timestamp\" :  \"timestamp_here\", \"Code\" :  \"code_here\", \"logId\" :  \"logid_here\", \"service\" :  \"service_here\", \"responseMessage\" :  \"responseMessage_here\", \"flow\" :  \"flow_here\", \"log\" :  \"log_here\"}","level":"info","host":"host_worker_here","flow":"flow_here","projectName":"project_name_here","caller":"caller_here"},"cluster_id":"cluster_id_here"}
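A minimal sketch of one way the nested stdout string could be unpacked with a second spath pass, assuming the raw event is the JSON shown above; the index and sourcetype names are placeholders to adjust for your environment:

index=your_index sourcetype=your_sourcetype
| spath input=_raw
| rename log.stdout as stdout
| spath input=stdout
| table log.jobId log.dc Componente channel Code service responseMessage

The first spath extracts the outer JSON (producing fields such as log.stdout), and the second spath parses the escaped JSON string held in stdout so that fields like Componente and responseMessage become available for stats and timechart.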
Hello Community! I have recently been trying to get AppDynamics to work for monitoring an application built with NodeJS. It is a frontend application hosted on an EC2 instance. The web server is Apache, which proxy-forwards the requests to the Node app running on a custom port within the EC2 instance. I tried adding the generated script to the main.js startup file (which I use to start the Node app via the pm2 service), then restarted the web app and generated load on it, but with no success. The connection check window keeps looping and nothing happens. I have successfully connected the DB and Machine agents, but I am unable to get this NodeJS app monitored. My project's hosting directory doesn't include any server.js file where I can add the code snippet; as far as I can tell, it only has a main.js file and an index.html file. So far I have been unable to get the code snippet to work. Any insights on this would be highly appreciated. Thanks!
Hello! Is it possible to implement something like this? I have 300+ devices that send logs to one index. I want to check whether a device has sent no logs for more than one minute and, if so, trigger an alert. When the device resumes sending logs, I also want a warning, and immediately after the warning the csv file should be updated. My search currently looks like this:

| tstats latest(_time) as lastSeen where index IN("my_devs") earliest=-2m latest=now by host
| lookup devs_hosts_names.csv host OUTPUT dev_name
| eval dev_name = if(isnotnull(dev_name), dev_name, "unknown host")
| eval status = if((now() - lastSeen <= 60), "up", "down")
| eval status = if(isnotnull(lastSeen), status, "unknown")
| search NOT [| inputlookup devs_status.csv | fields host dev_name status]
| convert ctime(*Seen)
| table host dev_name status lastSeen

At this point in the search I would like to trigger an alert for each dev_name and then rewrite (update) devs_status.csv, but I can't find how this can be done, so I'm asking for your help. I'm new to Splunk and don't know how normal this kind of request is for Splunk. Thanks.
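A minimal sketch of one way the status lookup could be refreshed after the alert search runs, assuming the field names above and that devs_status.csv should simply hold the latest status per host:

| tstats latest(_time) as lastSeen where index IN("my_devs") earliest=-2m latest=now by host
| lookup devs_hosts_names.csv host OUTPUT dev_name
| eval dev_name = if(isnotnull(dev_name), dev_name, "unknown host")
| eval status = if(isnull(lastSeen), "unknown", if(now() - lastSeen <= 60, "up", "down"))
| table host dev_name status
| outputlookup devs_status.csv

The outputlookup command overwrites the lookup with the current result set, and the saved alert itself could be set to trigger "for each result" so that one alert fires per dev_name.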
It depends on what you are trying to do since this search seems to be the opposite of what you had previously said you were trying to do.
I have the query below. How do I include it in the result query? Please advise.

This is what needs to be included in the result query:

| stats values(event) as events by customer
| where NOT (events = "s1" AND events = "s2" AND events = "s3")

Result query:

(index=1 sourcetype="abc" "s1 event received" AND "s2 event received" AND "s3 event received") OR (index=2 sourcetype="xyz" "created")
| rex "(?<e_type>s.) event received for (?<customer>\d+)"
| rex "(?<created>created) for (?<customer>\d+)"
| stats max(eval(if(e_type="s3",_time, null()))) as last_e_type max(eval(if(created="created", _time, null()))) as created_time dc(e_type) as e_types values(created) as created by customer
| addinfo
| where e_types=3 AND (created_time-last_e_type > 300 OR (isnull(created_time) AND info_max_time - last_e_type > 300))
| rex "customer (?<customer>\d+)"
How do I extract the customer number?
How do I extract the customer number from the event? Please advise.
Assuming customer and event have already been extracted:

| stats values(event) as events by customer
| where NOT (events = "s1" AND events = "s2" AND events = "s3")
The AND operator works within a single event. To combine multiple events you need to use an aggregating command. Assuming the customer number has been extracted into a field called "customer", then this will trigger an alert if any customer does not have all three events.

<<some search for S1, S2, and S3>>
| stats count by customer
| where count < 3
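Putting that together with the rex shown earlier in the thread, a sketch of the full alert search might look like this; the index name is a placeholder and the message text is assumed from the example events quoted in the original question below:

index=your_index "event has been received for customer"
| rex "(?<e_type>[Ss]\d) event has been received for customer (?<customer>\d+)"
| eval e_type=lower(e_type)
| stats dc(e_type) as e_types by customer
| where e_types < 3

This returns only the customers that are missing at least one of the three events, which is the condition the alert should fire on.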
I have the below messages in the Splunk log, e.g.:

s1 event has been received for customer 15778
S2 event has been received for customer 15778
S3 event has been received for customer 15778

I want to check that the "event has been received" message is present for all of S1, S2 and S3 for a particular customer. I used an AND condition but was not able to achieve this. Please help me with this. As per my scenario, if I have 100,000 customers, I want to check for each customer whether all 3 "event has been received" messages are present in the Splunk log, and if all 3 messages are not present I need to set up an alert.
Try including _time as well in your search, either using timechart or a _time bucket:

index=_internal source=*license_usage.log* type=Usage idx=*
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by st

or

index=_internal source=*license_usage.log* type=Usage idx=*
| eval GB=b/1024/1024/1024
| bin _time span=1d
| stats sum(GB) by _time,st
I am not sure why you are not adapting my previous suggestion - try something like this:

| stats count(eval(like(message, "%READ-ERROR -> DP is temporarily down%"))) as read_error_count count(Code) as count_by_id
| eval read_error_count = if(read_error_count > count_by_id/2, mvappend(read_error_count,"RED"), read_error_count)
| stats count(eval(like(message, "%READ-ERROR -> DP is temporarily down%"))) as read_error_count
| stats count(Code) as count_by_id
| eval color_value = if(read_error_count > count/2, 0, 1)

I want to colour the read_error_count column by the color_value result. Thanks @ITWhisperer
Correct, the question is not identical, but the essence of the question is the same, that is, how to colour cells based on the value in another cell on the same row; in your case you want the read_error_count colour based on the value of count_by_id. To do this, as I said earlier, and as is shown in more detail in the referenced solution, you use mvappend to add a second value to the read_error_count field based on the value of the count_by_id field:

| eval read_error_count=if(count_by_id > 10, mvappend(read_error_count,"RED"), read_error_count)

Then use CSS to hide the second multi-value cell and set the palette to change the colour if "RED" is a value. You can use other strings and colours to suit your requirements.
When I install the TA-Demisto app on a single ad-hoc SH node, it works fine, but when I install it using the SH deployer for the SH cluster nodes, it does not work and gives me this error:

Configuration page failed to load, the server reported internal errors which may indicate you do not have access to this page. Error: Request failed with status code 500 ERR0002

Any idea how to fix that error?
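One place to start looking for the underlying cause is the internal splunkd logs on the cluster members; a sketch, assuming the default _internal index is searchable and that the app name appears in the error messages:

index=_internal sourcetype=splunkd log_level=ERROR "TA-Demisto"
| stats count by host component

Grouping by host and component can show whether the 500 error comes from every search head member or only some, which usually narrows down whether the deployer push or the app configuration is at fault.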