All Posts


@bowesmana    I've used this code to check on the values <html> <h3>Token values</h3> <table border="0" cellpadding="12" cellspacing="0"> <tr> <td> <p> <b>Time range (epoch time)</b> </p> <p> <b>$$timeSelect1.earliest$$</b>: $timeSelect1.earliest$<br/> <b>$$timeSelect1.latest$$</b>: $timeSelect1.latest$<br/> <b>$$timeSelect1.earliest_ts$$</b>: $timeSelect1.earliest_ts$<br/> <b>$$timeSelect1.latest_ts$$</b>: $timeSelect1.latest_ts$<br/> <b>$$earliest$$</b>: $earliest$<br/> <b>$$latest$$</b>: $latest$<br/> <b>$$table_minutes_away$$</b>: $table_minutes_away$<br/> <b>$$drilldownsettingtimerange$$</b>: $drilldownsettingtimerange$<br/> </p> </td> <td> <p> <b>Filter Values</b> </p> <p> <b>$$displayUnits$$</b>: $displayUnits$<br/> <b>$$filterBy$$</b>: $filterBy$<br/> <b>$$removeParam$$</b>: $removeParam$<br/> <b>$$rankBy$$</b>: $rankBy$<br/> <b>$$filterNWCompletionCode$$</b>: $filterNWCompletionCode$<br/> <b>$$filterNWPredictionCode$$</b>: $filterNWPredictionCode$<br/> <b>$$useLoad$$</b>: $useLoad$<br/> <b>$$showLive$$</b>: $showLive$<br/> </p> </td> <td> <p> <b>Click Values</b> </p> <p> <b>$$job.runDuration$$</b>: $job.runDuration$<br/> <b>$$whereclick$$</b>: $whereclick$<br/> <b>$$clickname$$</b>: $click.name$<br/> <b>$$clickvalue$$</b>: $click.value$<br/> <b>$$clickname2$$</b>: $click.name2$<br/> <b>$$clickvalue2$$</b>: $click.value2$<br/> <b>$$clickearly$$</b>: $clickearly$<br/> <b>$$clicklate$$</b>: $clicklate$<br/> <b>$$row.duration$$</b>: $row.duration$<br/> <b>$$rownode$$</b>: $rownode$</p> </td> <td> <p> <b>Search Values</b> </p> <p> <b>$$radar|s$$</b>: $radar|s$<br/> <b>$$form.pre_source$$</b>: $form.pre_source$<br/> <b>$$form.source|s$$</b>: $form.source|s$<br/> <b>$$source$$</b>: $source$<br/> <b>$$form.terms|s$$</b>: $form.terms|s$<br/> <b>$$terms$$</b>: $terms$<br/> <b>$$subsystems|s$$</b>: $subsystems|s$<br/> <b>$$categories|s$$</b>: $categories|s$<br/> <b>$$hidden$$</b>: $hidden$<br/> <b>$$mlModelsAvailable$$</b>: $mlModelsAvailable$</p> </td> <td> <p> 
<b>Search Values</b> </p> <p> <b>$$search_results$$</b>: $search_results$<br/> </p> </td> </tr> </table> </html>   This is what I see when I clicked on any parts of the stacked charts (no difference in values)  
index="jenkins_console" source="*-deploy/*" NOT (source="*/gremlin-fault-injection-deploy/*" OR source="*pipe-test*" OR source="*java-validation-*") ("Approved by" OR "*Finished:*") | fields source | stats count(eval(match(_raw, "Approved by"))) as count_approved, count(eval(match(_raw, ".*Finished:*."))) as count_finish by source | where count_approved > 0 AND count_finish > 0 | stats dc(source) as Total | appendcols [ search(index="jenkins_console" source="*-deploy/*" NOT (source="*/gremlin-fault-injection-deploy/*" OR source="*pipe-test*" OR source="*java-validation-*") ("Finished: UNSTABLE" OR "Finished: SUCCESS" OR "Approved by" OR "Automatic merge*" OR "pushed branch tip is behind its remote" OR "WARNING: E2E tests did not pass")) | fields source host | stats count(eval(match(_raw, "Approved by"))) as count_approved, count(eval(match(_raw, "Finished: SUCCESS"))) as count_success, count(eval(match(_raw, "Finished: UNSTABLE"))) as count_unstable, count(eval(match(_raw, "Automatic merge.*failed*."))) as count_merge_fail, count(eval(match(_raw, "WARNING: E2E tests did not pass"))) as count_e2e_failure, count(eval(match(_raw, "pushed branch tip"))) as count_branch_fail by source, host | where count_approved > 0 AND (count_success > 0 OR (count_unstable > 0 AND (count_merge_fail > 0 OR count_branch_fail > 0 OR count_e2e_failure > 0))) | stats dc(source) as success ] | stats avg(success) as S, avg(Total) as T | eval percentage=( S / T * 100) | fields percentage,success, Total
That was my omission. (Syntax is explained in lookup.)  I assume that you are trying to match "host" field in the following. (Also, you need to control letter case in the table to all-lower case. | metasearch (index=os_* OR index=perfmon_*) | dedup host | eval host=lower(host) ```| eval eventTime=_time | convert timeformat="%Y/%m/%d %H:%M:%S" ctime(eventTime) AS LastEventTime | fields host eventTime LastEventTime index ^^^ the above is not calculated or used ``` | lookup host_lookup host output host AS matchhost | append [inputlookup host_lookup | rename host AS tablehost] | eventstats values(matchhost) as matchhost | eval Action = if(tablehost IN matchhost, "Keep Host", "Remove from Lookup")  
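The append/eventstats pattern in the SPL above is essentially a set-membership check: hosts seen in events are matched against a lookup table, and each table entry is kept only if it still appears in the event data. A minimal Python sketch of that logic, with made-up hostnames:

```python
def reconcile(event_hosts, lookup_hosts):
    """Return {lookup_host: action}, lower-casing to match the SPL's eval."""
    seen = {h.lower() for h in event_hosts}
    return {
        h: "Keep Host" if h.lower() in seen else "Remove from Lookup"
        for h in lookup_hosts
    }

# Hostnames here are hypothetical illustrations.
actions = reconcile(["WEB01", "db02"], ["web01", "db02", "app03"])
```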
Hi @Michael.Medel, Have a read of this TKB article and see if it helps out - https://community.appdynamics.com/t5/Knowledge-Base/Understanding-Synthetic-Visual-metrics-vs-BRUM-metrics-for-page/ta-p/31284
Hi @David.Machacek, This thread has not been active since 2018. I think it would be good for you to re-ask this question on the forums as a new question, giving as much context and detail as possible. It will get more visibility across the community that way.
o365 addon has been running fine. Token expired on the Azure side, so I generated a new one. Updating the Splunk addon gives me the error "Only letters, numbers and underscores are supported." and highlights the Tenant Subdomain or Tenant Data Center fields (see attachment). I can't complete the update without values in these fields. Not sure what to do here.
Interesting. Can you provide a run anywhere query example of how I would do the comparison?
Sounds like you need some sort of baseline for each service/customer - I would suggest you store this in a summary index. You can then retrieve the relevant stat from the summary index and compare it to your current actual to determine whether it is anomalous. You could also look at the MLTK; however, for this sort of analysis you may end up with multiple models for each service/customer combination, which becomes quite unwieldy.
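The baseline comparison described above can be sketched outside of SPL. This is a minimal illustration, assuming a per-service/customer baseline count has already been computed from the summary index; the numbers and tolerance are hypothetical:

```python
# Hypothetical daily-count baselines per (service, customer),
# as would be retrieved from a summary index.
baseline = {("svcA", "custA"): 2000, ("svcA", "custB"): 100}

def is_anomalous(service, customer, current_count, tolerance=0.5):
    """Flag a count that deviates more than `tolerance` (fraction) from baseline."""
    expected = baseline.get((service, customer))
    if expected is None:
        return True  # no baseline yet: surface for review
    return abs(current_count - expected) / expected > tolerance

big_drop = is_anomalous("svcA", "custA", 800)    # 60% below 2000 -> anomalous
normal   = is_anomalous("svcA", "custB", 110)    # within 50% of 100 -> fine
```

The per-key baseline is what lets svcA-custA (normally ~2000 events/day) and svcA-custB (normally ~100) get different effective thresholds without hard-coding either.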
Consider using SEDCMD in props.conf to remove the unwanted fields, assuming the fields can be described using a regular expression. For simplicity, use a separate SEDCMD for each field. For example, assuming the data looks like "...user_watchlist=foo, ip_options=bar, ...":
SEDCMD-no-user_watchlist = s/user_watchlist=[^,]+,//
SEDCMD-no-ip_options = s/ip_options=[^,]+,//
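SEDCMD values are sed-style substitutions applied at index time. The equivalent Python re.sub calls below show the effect of such patterns on a made-up sample event (with an added \s* so the trailing space after each comma is removed as well):

```python
import re

# Sample raw event text, invented for illustration.
raw = "ts=123, user_watchlist=foo, ip_options=bar, action=allow"

# Same idea as SEDCMD-no-user_watchlist and SEDCMD-no-ip_options:
# strip "field=value," (plus any following whitespace) from the event.
raw = re.sub(r"user_watchlist=[^,]+,\s*", "", raw)
raw = re.sub(r"ip_options=[^,]+,\s*", "", raw)
print(raw)  # ts=123, action=allow
```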
Each service/customer has different usage patterns... some are "normally" less busy than others. So I thought there might be a way to identify when something does not follow the "normal" pattern without putting in static thresholds? For the service/customer that is normally slower, the threshold will be lower than for a busier service/customer. If svcA-custA normally has 2000 events a day and svcA-custB has only 100 events a day, the thresholds will be different.
No. The if function has a different syntax. See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ConditionalFunctions#if.28.26lt.3Bpredicate.26gt.3B.2C.26lt.3Btrue_value.26gt.3B.2C.26lt.3Bfalse_value.26gt.3B.29 As for converting it to timechart, you can simply do | timechart span=1h count as total sum(SuccessFE) as SuccessFE by UserAgent instead of stats. This will, however, give you a table which you'll first have to fillnull (because the sum might be empty) and then... hmm... probably use foreach to calculate the ratio for each UserAgent separately. Or do untable and then stats. The latter approach is probably easier.
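The fillnull-then-per-agent-ratio step described above amounts to simple arithmetic on each time bucket of the timechart output. A sketch in Python with invented bucket data (column names mimic timechart's split-by naming):

```python
# One timechart row per time bucket; None stands for a missing (null) sum.
buckets = [
    {"total_iOS FE": 10, "SuccessFE_iOS FE": 9,
     "total_Web FE": 4,  "SuccessFE_Web FE": None},
]

for row in buckets:
    filled = {k: (v if v is not None else 0) for k, v in row.items()}  # fillnull
    for agent in ("iOS FE", "Web FE"):                                 # "foreach" per agent
        total = filled[f"total_{agent}"]
        row[f"ratio_{agent}"] = 100 * filled[f"SuccessFE_{agent}"] / total if total else 0.0
```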
Hi @richgalloway I think this is the point. What we're saying / agreeing is that there is no requirement for a unique 'capture group name'; effectively the two regex field values 'coalesce', and quite tidily in the instance that I have tested. This was a surprise, was not at all clear, and actually lends flexibility.
The highlighted text in my screen shot shows where the Admin manual says a capturing group is required in EXTRACT.  It does not say the group name must be unique because that is not a requirement.  Although a given regex may fail if the same group name is used more than once, the same group name may be used in multiple EXTRACT settings.
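This behaviour mirrors ordinary regex engines: a single pattern cannot reuse a group name, but the same name in two separate patterns (as in two separate EXTRACT settings) is fine. A small Python illustration, with invented sample strings:

```python
import re

# Reusing a group name inside ONE pattern is an error.
try:
    re.compile(r"(?P<SSID>\w+)-(?P<SSID>\w+)")
    duplicate_in_one = "compiled"
except re.error:
    duplicate_in_one = "failed"

# The same group name across TWO separate patterns works fine.
m1 = re.search(r"ssid=(?P<SSID>\w+)", "ssid=guest")
m2 = re.search(r"network=(?P<SSID>\w+)", "network=corp")
```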
@Roy_9 - Give me your exact search please.
hi @richgalloway , thanks for replying. Let me be clearer: I am extracting SSID twice, and the named capture group in both instances is 'SSID' per btool. What has surprised me, and what I can't see listed, is any requirement for a unique capturing group name.
Correct it to do what?  What are you expecting as output? The stats command is grouping by the host field, which doesn't exist.  In that scenario, stats will produce no output.  The host field was dropped by the timechart command.  Fix that by adding "by host" to the timechart command. Next, you'll find stats can't compute an average because the field specified, loadAvg1mipercore, is null.  The field is null because the eval that created it uses a field, loadAvg1mi, that doesn't exist. Here's an attempt to "correct" the search.  Whether or not it produces the desired and/or right output I don't know. index=os OR index=linux sourcetype=vmstat OR source=iostat [| inputlookup SEI-build_server_lookup.csv where platform=eid_rhel6 AND NOT (role-code-sonar) | fields host | format ] | rex field=host "(?<host>\w+)\..+" | timechart avg(avgWaitMillis) as loadAvg1mi by host | eval cores=4 | eval loadAvg1mipercore=loadAvg1mi/cores | stats avg(loadAvg1mipercore) as load by host
Ok, got it, thanks. But how can I change it to timechart? Because when I'm running this: index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" AND "Request.actionUrl"!="*staging*" | eval SuccessFE=if(('Request.status' IN (201, 207) ),1,0) | eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE",match(UserAgent, ".*android.*"), "Android FE",1=1, "Web FE") | stats count as total sum(SuccessFE) as SuccessFE by UserAgent | eval SuccessRatioFE = (SuccessFE/total)*100 | chart bins=100 values(SuccessRatioFE) over _time by UserAgent I get no results, while if I'm running a table I see results. Also, I cannot add the NOT filter I need inside the if statement: NOT "Request.data.twoFactor.otp.expiresInMs"="*"
Hi @yohhpark  If your data follows the format AA-1-a, AA-1-b and so on, then instead of making the value for your dropdown option AA-1*, make it AA-1-*. This will show everything with only the AA-1- leading characters and would not show AA-10... or AA-11... Again, this would only work if the data does follow the format with the dash (-) as the separator: AA-1-...
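The point about the trailing dash can be illustrated with shell-style wildcards, which behave like Splunk's * here: "AA-1-*" matches only the AA-1 group, while "AA-1*" also matches AA-10, AA-11, and so on. The sample values are invented:

```python
from fnmatch import fnmatch

values = ["AA-1-a", "AA-1-b", "AA-10-x", "AA-11-y"]
loose  = [v for v in values if fnmatch(v, "AA-1*")]   # matches everything above
strict = [v for v in values if fnmatch(v, "AA-1-*")]  # only the AA-1 group
```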
I think this is all a "non-issue" as it relates to Splunk... read on: if you search for this error as posted by the OP, you will see faults generated by Splunk. To illustrate, use this ugly query (don't do this at home; this is for demonstration purposes only): index=*wineventlog "faulting" "*splunk*" HOWEVER, take off the "*splunk*" from this, and you'll see that hundreds of other apps are doing this too. I'm calling this a Windows issue, not a Splunk issue.   FWIW: version 9.1.1.
The capture group name is indeed mentioned.