All Posts

Interesting. Can you provide a run anywhere query example of how I would do the comparison?
Sounds like you need some sort of baseline for each service/customer - I would suggest you store this in a summary index. You can then retrieve the relevant stat from the summary index and compare it to your current actual to determine if it is anomalous. You could also look at the MLTK; however, for this sort of analysis, you may end up with multiple models for each service/customer combination, which becomes quite unwieldy.
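The baseline-comparison idea can be sketched outside Splunk. A minimal Python illustration, where the function name, the stored baselines, and the 3x tolerance are all illustrative assumptions, not anything from the summary index itself:

```python
# Illustrative sketch: compare a current count against a stored
# per-service/customer baseline. Names and threshold are assumptions.
def is_anomalous(current: float, baseline: float, tolerance: float = 3.0) -> bool:
    """Flag a value that deviates from its baseline by more than
    `tolerance` times in either direction."""
    if baseline == 0:
        return current > 0
    return current > baseline * tolerance or current < baseline / tolerance

# Baselines as they might be retrieved from a summary index.
baselines = {("svcA", "custA"): 2000, ("svcA", "custB"): 100}

print(is_anomalous(2100, baselines[("svcA", "custA")]))  # within normal range
print(is_anomalous(900, baselines[("svcA", "custB")]))   # 9x the usual volume
```

The point of the sketch is that each service/customer pair is judged against its own stored number, so no single static threshold is needed.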
Consider using SEDCMD in props.conf to remove the unwanted fields, assuming the fields can be described using a regular expression. For simplicity, use a separate SEDCMD for each field. For example, assuming the data looked like "...user_watchlist=foo, ip_options=bar, ...":
SEDCMD-no-user_watchlist = s/user_watchlist=[^,]+,//
SEDCMD-no-ip_options = s/ip_options=[^,]+,//
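Those substitutions can be checked outside Splunk before touching props.conf. A quick sketch using Python's re module (close to, but not identical to, the sed syntax SEDCMD uses); the sample event text is made up, and `\s*` is added here to also consume the space after the comma:

```python
import re

# Made-up sample event in the "field=value," style described above.
event = "time=12:00, user_watchlist=foo, ip_options=bar, action=allow"

# Same idea as the SEDCMD rules: strip "field=value," plus trailing space.
event = re.sub(r"user_watchlist=[^,]+,\s*", "", event)
event = re.sub(r"ip_options=[^,]+,\s*", "", event)

print(event)  # time=12:00, action=allow
```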
Each service/customer has different usage patterns... some are "normally" less busy than others. So I thought there is a way to identify when something does not follow the "normal" pattern without putting in static thresholds? For the service/customer that is normally slower, the threshold will be lower than for a busier service/customer. If svcA-custA normally has 2000 events a day and svcA-custB has only 100 events a day, the thresholds will be different.
No. The if function has a different syntax. See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ConditionalFunctions#if.28.26lt.3Bpredicate.26gt.3B.2C.26lt.3Btrue_value.26gt.3B.2C.26lt.3Bfalse_value.26gt.3B.29 As for converting it to timechart, you can simply do
| timechart span=1h count as total sum(SuccessFE) as SuccessFE by UserAgent
instead of stats. This will however give you a table which you'll have to firstly fillnull (because the sum might be empty) and then... hmm... probably use foreach to calculate the ratio for each UserAgent separately. Or do untable and then stats. Probably the latter approach is easier.
Hi @richgalloway I think this is the point. What we're saying / agreeing is that there is no requirement for a unique 'capture group name' - effectively the two regex field values 'coalesce', and quite tidily in the instance that I have tested. This was a surprise, was not at all clear, and actually lends flexibility.
The highlighted text in my screen shot shows where the Admin manual says a capturing group is required in EXTRACT.  It does not say the group name must be unique because that is not a requirement.  Although a given regex may fail if the same group name is used more than once, the same group name may be used in multiple EXTRACT settings.
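The "may fail" part is easy to demonstrate outside Splunk. Python's re engine behaves like the docs describe (this is an illustration of the general regex rule, not of Splunk's own engine): two separate patterns, like two EXTRACT settings, can happily share a group name, but one pattern reusing the name is rejected at compile time.

```python
import re

# Two separate patterns (think: two EXTRACT settings) sharing <SSID> is fine.
a = re.compile(r"ssid=(?P<SSID>\w+)")
b = re.compile(r"network=(?P<SSID>\w+)")
print(a.search("ssid=guest").group("SSID"))    # guest
print(b.search("network=corp").group("SSID"))  # corp

# A single regex reusing the same group name fails to compile.
try:
    re.compile(r"(?P<SSID>\w+)-(?P<SSID>\w+)")
except re.error as exc:
    print("rejected:", exc)
```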
@Roy_9 - Give me your exact search please.
hi @richgalloway, thanks for replying. Let me be clearer: I am extracting SSID twice, and the named capture group in both instances is 'SSID' per btool. What has surprised me, and what I can't see listed anywhere, is any requirement for a unique capturing group name.
Correct it to do what? What are you expecting as output? The stats command is grouping by the host field, which doesn't exist. In that scenario, stats will produce no output. The host field was dropped by the timechart command. Fix that by adding "by host" to the timechart command. Next, you'll find stats can't compute an average because the field specified, loadAvg1mipercore, is null. The field is null because the eval that created it uses a field, loadAvg1mi, that doesn't exist. Here's an attempt to "correct" the search. Whether or not it produces the desired and/or right output I don't know.
index=os OR index=linux sourcetype=vmstat OR source=iostat
    [| inputlookup SEI-build_server_lookup.csv where platform=eid_rhel6 AND NOT (role-code-sonar)
     | fields host
     | format ]
| rex field=host "(?<host>\w+)\..+"
| timechart avg(avgWaitMillis) as loadAvg1mi by host
| eval cores=4
| eval loadAvg1mipercore=loadAvg1mi/cores
| stats avg(loadAvg1mipercore) as load by host
ok, got it, thanks, but how can I change it to timechart? Because when I'm running this:
index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" AND "Request.actionUrl"!="*staging*"
| eval SuccessFE=if(('Request.status' IN (201, 207)),1,0)
| eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
| stats count as total sum(SuccessFE) as SuccessFE by UserAgent
| eval SuccessRatioFE = (SuccessFE/total)*100
| chart bins=100 values(SuccessRatioFE) over _time by UserAgent
I get no results, while if I'm running a table I see results. Also, I cannot add the NOT filter I need inside the if statement: NOT "Request.data.twoFactor.otp.expiresInMs"="*"
Hi @yohhpark  If your data follows the format AA-1-a, AA-1-b and so on, then instead of making the value for your dropdown option AA-1*, make it AA-1-*. This will show everything for only the AA-1- leading characters and would not show AA-10... or AA-11... Again, this would only work if the data does follow the format with the dash (-) as the separator, i.e. AA-1-...
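The difference between the two wildcard values can be illustrated with shell-style matching. Here fnmatch is a stand-in for Splunk's wildcard semantics (which behave the same way for `*` in this case), and the sample values are made up:

```python
from fnmatch import fnmatch

values = ["AA-1-a", "AA-1-b", "AA-10-x", "AA-11-y"]

# AA-1* is too loose: it also matches AA-10 and AA-11.
print([v for v in values if fnmatch(v, "AA-1*")])   # all four values

# AA-1-* pins the dash separator, so only the AA-1- family matches.
print([v for v in values if fnmatch(v, "AA-1-*")])  # ['AA-1-a', 'AA-1-b']
```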
I think this is all a "non issue" as it relates to Splunk... read on: if you search for this error as posted by the OP, you will see faults generated by Splunk. To illustrate, use this ugly query (don't do this at home, this is for demonstration purposes only): index=*wineventlog "faulting" "*splunk*" HOWEVER, take "*splunk*" off this, and you'll see that hundreds of other apps are doing this too. I'm calling this a Windows issue, not a Splunk issue. FWIW: version 9.1.1.
The capture group name is indeed mentioned.  
Hi @Arun, first of all, I suggest opening a new question instead of using an already-closed one; that way you're more likely to get answers. Anyway, did you disable the local firewall (firewalld) on Red Hat? I didn't experience the issues you describe; check the local firewall and the grants on the user you're using. Ciao. Giuseppe
Hi @rferg06,  Be sure that in your dataSources section you are specifying the queryParameters with your dateRange token:
"queryParameters": {
    "earliest": "$dateRange.earliest$",
    "latest": "$dateRange.latest$"
}
You can also add a submit button to your "layout" section so that the tokens will run when the submit button is clicked:
"layout": {
    "options": {
        "submitButton": true,
        "height": 1750
    }
}
If you already have these in place, you may want to clear your Splunk web cache by adding _bump to the end of your URL.
Hi @inventsekar, as @isoutamo said, if you don't need local indexing, you can use the Forwarder License (it was created just for this purpose!). Using this license, you have all the features of a Splunk instance except indexing. In other words, you can preprocess (mainly parse) your data. In this way you can locate this job on HFs instead of IDXs. Using the Forwarder License, you don't need to communicate with the LM, unless you want a local copy of your logs: in this case you need a unidirectional connection to the LM on port 8089. Ports between HF and LM aren't otherwise relevant. On the HF you need only: 9997 to send data to the IDXs and 8089 to manage the HF with your DS. Ciao. Giuseppe
No, it's not about the "base search" functionality in dashboards. I'm not talking about that. I'm talking about the base search conceptually. Conceptually I'd do something like this:
<your base search>
| eval success=if(your_criteria_for_success,1,0)
| stats count as total sum(success) as successes by UserAgent
| eval ratio=successes/total
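The grouping logic behind that SPL can be sketched in plain Python over made-up events (the user-agent names and event data here are purely illustrative):

```python
from collections import defaultdict

# Made-up events as (user_agent, succeeded) pairs.
events = [("iOS FE", True), ("iOS FE", False), ("iOS FE", True), ("Web FE", True)]

totals = defaultdict(int)     # like: stats count as total by UserAgent
successes = defaultdict(int)  # like: sum(success) by UserAgent
for agent, ok in events:
    totals[agent] += 1
    successes[agent] += int(ok)  # success is 1 or 0, so sum() counts hits

ratios = {agent: successes[agent] / totals[agent] for agent in totals}
print(ratios)  # iOS FE -> 2/3, Web FE -> 1.0
```

The key point mirrored from the SPL is that both aggregates are computed per group first, and the ratio is derived afterwards.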
Hello @gcusello,  I've followed the same migration method you referred to here, and the entire setup was migrated from CentOS to RHEL. From the backend everything looks fine, but the web UI does not come up. I tried multiple things but no luck:
- inputs.conf and server.conf have the same host name as the servers
- port 8000 never comes up to the listening state; other ports, like 8089 and 8088, are working as expected
- I tried untarring the Splunk package on the new server with the same version, but no luck
The web log and splunk.log files under /opt/splunk/var/log/splunk also don't say anything about the issue.
Hello, thanks for your detailed answer. What I'm trying to achieve is to calculate the ratio between the attempts and the successes. Actually, I've searched for details on the base search and how to use it in other parts of the query (I also want to use the base search in another panel on the same dashboard) and I couldn't find any information. I would really appreciate it if you could correct my query, as I'm not familiar with a different way to achieve the same. P.S. - in the end, the final query should append the results from this query and the results from a similar query for the BE part, so the final query will be larger and heavier.