All Posts


Usually it's best to use a totally different home directory for the splunk user, like /home/splunk, and even to lock the account and use nologin (or something similar) as its login shell. I suppose that you have Unix admins, or can use Google, to switch the home directory to the correct one.
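As a rough sketch of the steps above (run as root; the nologin path varies by distro, e.g. /sbin/nologin on some systems, and these commands assume the home directory does not already exist elsewhere in use):

```shell
usermod -d /home/splunk -m splunk      # set the home directory and move existing contents
usermod -s /usr/sbin/nologin splunk    # deny an interactive login shell
usermod -L splunk                      # lock the account's password
```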
We are doing an API/app deployment in one region at 12:00 PM EST. The first time frame would be 11:30 AM to 12:00 PM EST (I need to get the error count) and the second time frame would be 12:00 PM to 12:30 PM EST (I need to get the error count as well). We need to take the generated log volume into account and get the deviation in the error count between these two time frames. If it exceeds a certain threshold, I will proceed with or stop the deployment. So the output of the query is the deviation against the threshold, as a percentage.
I faced this same issue. I resolved it by adding the list_storage_passwords capability to the non-admin role.
You will need a common value in the two types of events to correlate them.  For example, if each pair has a unique transaction ID, you can do

| stats values(Resp_time) as Resp_time values(Req_time) as Req_time by transaction_id
| eval diff = Resp_time - Req_time

(Note that eval needs an assignment, e.g. diff=..., to produce a field.)  Alternatively, if you have some other way to determine a pairing, e.g., the two always happen within a deterministic interval (say, a request comes in at 5 minutes into the hour, a unique response is sent within the hour, and NO other request comes in during the same hour), you can use that as the criterion.  There may be other conditions where you would use the transaction command.  Unless you give us the exact condition, mathematically there is no solution.
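For the transaction-command route mentioned above, a minimal sketch, where index, sourcetype, and the transaction_id field name are placeholders you would replace with your own:

```
index=your_index sourcetype=your_sourcetype
| transaction transaction_id
| eval diff = Resp_time - Req_time
| table transaction_id Req_time Resp_time diff
```

transaction groups the paired events into one result, after which both timestamps are available for the subtraction.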
Instead of stats, use eventstats.

index="oap"
| eventstats perc25(tt) as P25, perc50(tt) as P50, perc75(tt) as P75 by oper
| foreach P25 P50 P75
    [eval <<FIELD>>count = if(tt><<FIELD>>, 1, 0)]
| stats sum(P*count) as P*count by oper P25 P50 P75
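An alternative that yields one row per quartile directly, closer to the value<>quantity pairs asked for (a sketch, assuming the same index and numeric field tt, without the per-oper split):

```
index="oap"
| eventstats perc25(tt) as P25, perc50(tt) as P50, perc75(tt) as P75
| eval quartile = case(tt<=P25, "0-25", tt<=P50, "25-50", tt<=P75, "50-75", true(), "75-100")
| stats count by quartile
```

eventstats attaches the quartile boundaries to every event, case assigns each value to a bucket, and the final stats counts values per bucket.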
Something like this - obviously you will need to adjust it depending on your events and required time periods

index="prod_k8s_onprem_dii--prod1" "k8s.namespace.name"="abc-secure-dig-servi-prod1" "k8s.container.name"="abc-cdf-cust-profile" (earliest=first_earliest latest=first_latest) OR (earliest=second_earliest latest=second_latest)
| eval period=if(_time>=first_earliest AND _time<first_latest,"First","Second")
| stats count(eval(status="Error")) as error_count count as event_count by period
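That search ends with per-period error and event counts. To reduce those two rows to the single deviation percentage the question asks for, one possible continuation (a sketch: normalizing by event_count, the 20% threshold, and relying on "First" sorting before "Second" are all my assumptions):

```
| eval error_rate = error_count / event_count
| stats first(error_rate) as before_rate last(error_rate) as after_rate
| eval deviation_pct = round(100 * abs(after_rate - before_rate) / before_rate, 2)
| eval decision = if(deviation_pct > 20, "STOP", "PROCEED")
```

Dividing by event_count accounts for differing log volume between the two windows; deviation_pct is then the relative change in error rate.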
OK, I will try to explain it. There are thousands of values; of course values can repeat. So first I want to divide them into quartiles, in my case: 0-25, 25-50, 50-75, 75-100. Then, and this is my problem, count how many values each section/quartile has. In my case I need 4 pairs: value<>quantity. Is it clearer now?
You need to tell the volunteers what kind of "two time frames" you are concerned about.  Two adjacent, equal time intervals? Two equal intervals days apart?  Or some random intervals?
I just added an additional SEDCMD: SEDCMD-removereset = s/\x1B\[0;m//g
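For context, SEDCMD is an index-time setting in props.conf on the indexer or heavy forwarder, so it only affects newly indexed data; a minimal sketch, with the sourcetype name as a placeholder:

```
# props.conf (indexer or heavy forwarder)
[your_sourcetype]
# strip the leftover ANSI reset sequence
SEDCMD-removereset = s/\x1B\[0;m//g
```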
So, how does your rex command extract src_host_2, Service_2, and State_2 when they don't exist in the events?
Hi @Manish.Talukdar, If John's reply helped answer your questions, click the 'Accept as Solution' button. If not, reply to keep the conversation going. 
Version 9.2.2 seems to have solved this issue.  I now have Splunk Enterprise and the Splunk forwarder running on the same server in three separate environments.
I know it's an old post, but it helped me. However, it leaves `[0;m` behind, which I believe is the ANSI 'Reset' sequence.
If they are intended to be a Stand-Alone Machine Agent, insert a line in controller-info.xml:

<application-name>TypeAppNameHereAsSeenInAppDController</application-name>

Then restart the Machine Agent service.
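A sketch of the relevant fragment of controller-info.xml, with the values as placeholders (tier-name and node-name are shown on the assumption the agent should also report under a specific tier and node; drop them if not needed):

```
<controller-info>
    <application-name>MyAppAsSeenInController</application-name>
    <tier-name>MyTier</tier-name>
    <node-name>MyNode</node-name>
</controller-info>
```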
You're right.  It is still showing the same amount in every interval. Thanks a lot.
Hi, thanks, but the problem is that they are not from the same events; they are separate.
Hi @sintjm , if they are integers or they are in epoch time, you can calculate the difference using the eval command:

<your_search>
| eval diff=Resp_time-Req_time

Ciao. Giuseppe
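If the timestamps are strings rather than epoch numbers, they would first need converting with strptime; a sketch, assuming a format like 2024-01-01 12:00:00 (adjust the format string to match your data):

```
<your_search>
| eval req_epoch = strptime(Req_time, "%Y-%m-%d %H:%M:%S")
| eval resp_epoch = strptime(Resp_time, "%Y-%m-%d %H:%M:%S")
| eval diff = resp_epoch - req_epoch
```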
Hi @kp_pl , sorry, but I don't understand your request: perc75(tt) is one of the calculated values, so why do you want to add a new column? Could you share what results you are expecting? Ciao. Giuseppe
Hi @Silah , yes, you can create two different stanzas, one for each sender, with different indexes. The only question is: why? Usually indexes are chosen when you have different retentions or different access grants, not different sources or technologies. Different sources are recognized in the same index by host, and different technologies are recognized by sourcetype. Ciao. Giuseppe