All Posts



Brackets in the wrong place — it looks like the else part of the first if should be another if:

| eval Test=if(like('thrown.extendedStackTrace',"%403%"),"403", if(like('thrown.extendedStackTrace',"%404%"),"404","###ERROR####"))
Hello @ITWhisperer, thank you for your response. If I look at the last 10 days, for some hours I get more indexed data points than event data points, and over time this flips to more event data points than indexed data points. There is no clear cyclic pattern during the day when this change happens: for some periods the former is observed, and for other periods the latter.

The problem I am trying to solve is identifying a potential data-ingestion issue for a given data source. The anticipated pattern is that event data points and indexed data points closely follow the same curve, with the event data point volume slightly greater than or equal to the indexed data point volume. When the two patterns diverge over the same period, or when there are more indexed data points than event data points, that is when an issue may be occurring. I am trying to detect this at run time, or close to it, and later address it per data source.

Please let me know if the above answers your questions so I can get more guidance on the topic. Thank you
Create a search that finds the data you want from your index, then use outputlookup to write the results to a lookup file.
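For example, a minimal sketch (the index name abc comes from the question; the field names host and status are hypothetical placeholders — substitute fields that exist in your data):

```
index=abc
| stats count by host, status
| outputlookup my_new_lookup.csv
```

Running this once writes my_new_lookup.csv into the app's lookups directory, overwriting any existing file of that name; you can verify the contents afterwards with | inputlookup my_new_lookup.csv.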
Set this alert to run every 30 minutes, looking back 1 hour:

index=my_index source="/var/log/nginx/access.log" | bin _time span=30m | stats avg(request_time) as Average_Request_Time by _time | streamstats count as weight | eval alert=if(Average_Request_Time>1,weight,0) | stats sum(alert) as alert | where alert==1

This produces one row per 30-minute window; the sum of the weights is 1 only when the first window breached the threshold and the second did not, i.e. the alert has recovered.
@ITWhisperer thanks for the solution, I made a few small changes to get my desired results.
Hey Experts, I'm new to Splunk and I'm trying to create a new lookup from data in an index (index=abc). Can someone please guide me on how to achieve this? Any help or example queries would be greatly appreciated. Thank you!
Hello @ITWhisperer, I am trying to get the volume of data ingestion, broken down by index group. I tried this SPL but am unable to get the results into the table:

index=summary source="splunk-ingestion" | dedup keepempty=t _time idx | stats sum(ingestion_gb) as ingestion_gb by _time idx | bin _time span=1h | eval ingestion_gb=round(ingestion_gb,3) | eval group_field=if(searchmatch("idx=.*micro.*group1"), "group1", searchmatch("idx=.*soft.*"), "group2", true(), "other") | timechart limit=0 span=1d sum(ingestion_gb) as GB by group_field

We have a list of indexes like: AZ_micro, micro, AD_micro, Az_soft, soft, AZ_soft. The 'micro' indexes are grouped under the name 'microgroup', while the 'soft' indexes are grouped under 'softgroup', and so on. In the table I want to show the volume per group, like:

group name | volume
microgroup | <0000>
softgroup  | <0000>
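A sketch of how the grouping eval might be written instead (assuming the index name is in a field called idx, as in the query above): if() takes exactly three arguments, so multi-way branching needs case(), and match() tests a single field against a regex, which fits better here than searchmatch():

```
index=summary source="splunk-ingestion"
| bin _time span=1h
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| eval group_field=case(match(idx,"micro"),"microgroup", match(idx,"soft"),"softgroup", true(),"other")
| timechart limit=0 span=1d sum(ingestion_gb) as GB by group_field
```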
index=my_index source="/var/log/nginx/access.log" | stats avg(request_time) as Average_Request_Time | where Average_Request_Time>1

I have this query set up as an alert for when my web app request duration goes over 1 second; it searches back over a 30-minute window. I want to know when this alert has recovered. So, effectively, I want to run this query twice — against the first 30 minutes of an hour and then the second 30 minutes — and get a result I can alert on. The result would indicate that the first 30 minutes was over 1 second average duration and the second 30 minutes was under 1 second, and thus it recovered. I have no idea where to start with this! But I do want to keep the query above as my main alert for an issue, and have a second alert query for this recovery element. Hope this is possible.
Are you saying that for every hour there are more index data points than event data points, or that it happens only sometimes? Even then, let's say you have a lag between the event time and the index time, and that indexing happens at 5 minutes past the hour, but the events picked up are timestamped from 5 minutes before to 5 minutes past. The count for that index time will include events which are not in that hour. Index time and event time are two different scales running independently of each other. Depending on your source data, events may be indexed before or after their event time. What problem is it that you are trying to solve?
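One way to observe the two time scales directly is to chart the gap between _indextime and _time per event (a sketch — substitute your own index and time range):

```
index=your_index
| eval lag_seconds=_indextime-_time
| timechart span=1h avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag
```

A consistently positive lag that straddles hour boundaries would explain hours where indexed counts exceed event counts.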
did you get any solution?
Works perfect, thanks!
I found that using the following match condition is enough to get the job done: <condition match="$row.gender$==&quot;female&quot;"> Thanks for your answer — it let me discover that there is such a thing as a conditional drilldown!
Having a similar issue with | eval Test= if( (like('thrown.extendedStackTrace',"%403%"),"403"),(like('thrown.extendedStackTrace',"%404%"),"404"),"###ERROR####") but I get the error: Error in 'EvalCommand': The expression is malformed. Expected ).
Excellent, that worked.. Thank You !!
Use single quotes around field names that contain dots: | eval Test1=substr('thrown.extendedStackTrace', 1, 3)
Dears, I'm trying to install Events Service 23.7 with Elasticsearch 8. The scenario:

1- Adding a host as the root user => this is only possible as root; I tried with another user over passwordless SSH but it didn't work. (If you advise adding hosts as a different user, I will share the error for that.)

2- After adding the host as root, I go to the Events Service:
   a- The default installation directory is /opt/appdynamics/platform/product/events-service/data and I change it to /home/appd-user/appdynamics-event-service-path/. This way I get the error "Unable to check the health of Events Service hosts [events-03, events-02, events-01] through port 9080", or it runs and then stops within seconds. The logs show that Elasticsearch stopped, because it was running as the root user. My question here is also: why does the installation of the Events Service always go under /opt/appdynamics?

To solve this issue I added the hosts with the appd user, but on each Events Service server I first ran:

sudo chown -R appd:appd /opt/
sudo chmod -R u+w /opt/

Adding the hosts went well and the Events Service installation went well, but some of the Events Services stopped with an "Elasticsearch running with the root user" error even though it runs as the appd user. To solve that I ran manually on each server:

bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

Anyway, I see all of this as a workaround and I want something more stable; I'm sure someone will guide me to the right steps.

BR
Abdulrahman Kazamel
index=foo sourcetype=json_foo source="az-foo" |rename tags.envi as env |search env="*A00001*" OR env="*A00002*" OR env="*A00005*" OR env="*A00020*" |stats count by env |eval env=case(match(env,"A0000... See more...
index=foo sourcetype=json_foo source="az-foo" |rename tags.envi as env |search env="*A00001*" OR env="*A00002*" OR env="*A00005*" OR env="*A00020*" |stats count by env |eval env=case(match(env,"A00001"),"PBC",match(env,"A00002"),"PBC",match(env,"A00005"),"KCG",match(env,"A00020"),"TTK",true(),env)
Hi @Mr_Sneed, good for you, see you next time! Let us know if we can help you more, or please accept an answer (possibly your own) for the other people in the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
In splunk.log there was an interesting entry about the hostname not being resolvable. I changed the hostname and everything works. Thanks for the help
Hi @Mr_Sneed, as you can read at https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Deploymentclientconf , you have to insert in your deploymentclient.conf:

[target-broker:deploymentServer]
targetUri = 10.1.10.69:8089

That's the output of the "splunk set deploy-poll" command, nothing else. Then you should check (using telnet) whether the route on port 8089 between the client and the Deployment Server is open. Ciao. Giuseppe