All Posts


We need to set up an alert that fires if a server has had no Java process for 15 minutes, and we want only one alert sent until the issue is resolved. Do we need to create two windows for this?

| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| join host type=left
    [| mstats avg(ps_metric.pctMEM) as avg_mem_java avg(ps_metric.pctCPU) as avg_cpu_java count(ps_metric.pctMEM) as ct_java_proc where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) sourcetype=ps_metric COMMAND=java by host host_ip COMMAND USER ]
| fields - c
| eval is_java_running = if(ct_java_proc>0, 1, 0)
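For reference, here is a rough single-window sketch of the same idea, reusing the index, metrics, and hosts from the question; the left join on host_ip, the fillnull, and running the alert over a 15-minute window are assumptions, not a tested solution:

| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| join type=left host_ip
    [| mstats count(ps_metric.pctCPU) as ct_java_proc where index=cpet-os-metrics sourcetype=ps_metric COMMAND=java host_ip IN (10.0.0.1,10.0.0.2) by host_ip]
| fillnull value=0 ct_java_proc
| eval is_java_running = if(ct_java_proc>0, 1, 0)
| where is_java_running=0

Run over the last 15 minutes, this returns only hosts that report CPU metrics but have no Java process in that window; the "one alert until resolved" behaviour is usually approximated with the alert's throttle/suppression settings rather than a second search window.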
Your link leads to the wrong documentation (though for some strange reason Google seems to favour it over the proper SPL documentation). There are two different search languages - SPL and SPL2. SPL is used within Splunk Enterprise (and Splunk Cloud); SPL2 is used here and there (I think the most notable use is the Edge Processor), but it's not as widely used as SPL. I know it's confusing. Anyway, you need the docs for SPL, not SPL2. https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/WhatsInThisManual
OK. So modifications to the Domain Admins group are reflected with different events than 4743. So you seem to have a different problem than just losing one particular event ID.
Ok, been looking at this. I did not realize values(*) by * meant 'look at everything' - I had just seen it in other examples! I see now that you can do something like this to get just the rows you want:

index="my_data" resourceId="enum*"
| stats values(sourcenumber) as sourcenumber values(destinationnumber) as destinationnumber by guid

Never did I imagine this would be FASTER, because I guess I just thought the first line meant it would get all the data anyway. This is a huge help and I'm going to play with it for a while to get a feel for it (it appears to also remove my need to use mvdedup).

In reading about Indexed Extractions, it notes that you can extract at search time (which we are doing above) or 'add custom fields at index' - the latter is what we're talking about doing. At first glance I don't see the performance bump from just extracting, because I will still have two separate streams (what I called DS1 and DS2). I think what would be a boon for performance is if I could consolidate the two streams of data into a new stream that had 'just the 12 fields I need', but that feels like a different thread!

This is GREAT INFORMATION and THANK YOU SO MUCH!!!!
More screenshots sure would help - where is that? I can see stuff like Edit Description/Permissions/etc. but not Edit Summary Indexing.
Hello @Dishant.Mokal , By default, the Machine Agent only monitors containers that have a running APM Agent. You can change this by setting the sim.docker.monitorAPMContainersOnly property on the Controller. See Controller Settings for Server Visibility. That's why you are not seeing all the containers running on the Docker host. Doc ref: https://docs.appdynamics.com/appd/21.x/21.5/en/infrastructure-visibility/monitor-containers-with-docker-visibility If it is a SaaS Controller, you should raise a Support ticket to have the property enabled or disabled, so that we can work with our OPS team. Best Regards, Rajesh Ganapavarapu
Hello all, I send some logs from multiple endpoints to a standalone Splunk HTTP Event Collector. Many logs are sent successfully, but some of them (same index, same endpoint, ...) get a 403 response when sending and are not indexed. I think it might be related to threads or sockets. Any ideas are appreciated.
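For anyone hitting a similar 403, one quick way to test the token outside the sending application is to post a test event directly with curl; the host, port, token, and index below are placeholders (assuming the default HEC port 8088):

curl -k "https://your-splunk-host:8088/services/collector/event" \
     -H "Authorization: Splunk <your-hec-token>" \
     -d '{"event": "hec connectivity test", "sourcetype": "manual", "index": "main"}'

A 403 from HEC generally points at the token (invalid or disabled) rather than threads or sockets; an overloaded server typically returns a 503 "Server is busy" instead.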
I want to calculate the percentage of status code 200 out of the total count of status codes by time. I have written the query below using appendcols. The query works, but it is not giving the percentage every minute, i.e. by _time. I want the percentage of status code 200 by _time as well. Can anybody help me with how to write this query?

index=* sourcetype=* host=*
| stats count(sc_status) as Totalcount
| appendcols
    [ search index=* sourcetype=* host=* sc_status=200
    | stats count(sc_status) as Count200 ]
| eval Percent200=Round((Count200/Totalcount)*100,2)
| fields _time Count200 Totalcount Percent200
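For comparison, here is a rough sketch of how the per-minute percentage is often computed in a single stats pass instead of appendcols; the field names are taken from the question and the 1-minute span is an assumption:

index=* sourcetype=* host=*
| bin _time span=1m
| stats count(sc_status) as Totalcount, count(eval(sc_status="200")) as Count200 by _time
| eval Percent200=round((Count200/Totalcount)*100, 2)
| fields _time Count200 Totalcount Percent200

Because both counts are grouped by the binned _time, the percentage comes out per minute without needing a subsearch.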
Hi all, I am trying to authenticate a user against the REST API, but when testing via curl, it fails when using the LB URL (F5). The user has been replicated across all SHC members and can log in via the UI.

# curl -k https://Splunk-LB-URL:8089/services/auth/login -d username=user -d password='password'
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN" code="incorrect_username_or_password">Login failed</msg>
  </messages>
</response>

But when I try the same against a SH member directly, it works.

# curl -k https://Splunk-SearchHead:8089/services/auth/login -d username=user -d password='password'
<response>
  <sessionKey>gULiq_E7abGyEchXyw7rxzwi83Fhdh8gIGjPGBouFUd37GuXF</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>

Initially I thought it could be something on the LB side, but the LB URL works just fine for the "admin" user.

# curl -k https://Splunk-LB-URL:8089/services/auth/login -d username=admin -d password='password'
<response>
  <sessionKey>gULiq_E7abGyEchXyw7rxzwi83Fhdh8gIGjPGBouFUd37GuXF</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>

Has anyone come across an issue like this? Why would admin work fine via the LB, while a new local user only works against the SH directly and not via the load balancer?
Hello guys, hope you are doing great! I want to configure a query. Some users are disabled in AD, and in Splunk ES, when I open the Identity Investigator, it also shows them as disabled (cn=*,ou=disabled,ou=united,ou=accounts,dc=global,dc=ual,dc=com). But under Users it still shows their role under Roles, when it should show as no_access. Now I want to build a query and create an alert. Can you please help me with this? Ani
There's almost always more than one way to do something in SPL, but why take the hard road? <<your current search>> | appendpipe [stats sum(cap) as cap, sum(login) as login | eval location="AM05"]
You may want to walk back the SPL and see on which line results are getting dropped off. Is it the initial tstats callout? Just looking at it, the where clause in the tstats looks a bit strange, but it's hard to say without seeing the contents of the internal_ranges.csv lookup. Does a query like this pull back any results?

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_ip) as dest_ip, dc(All_Traffic.dest_ip) as unique_dest_ips, values(All_Traffic.dest_port) as dest_port, values(All_Traffic.action) as action, values(sourcetype) as sourcetype
    from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.src_ip IN ("10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12") AND NOT All_Traffic.dest_ip IN ("10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12") AND All_Traffic.dest_port=445
    by _time All_Traffic.src_ip span=5m
| rename All_Traffic.* as *

I just switched up the tstats where filter a bit so that src_ip is an internal IP and dest_ip is an external IP, which I think is what you described in the original post.
Are there any other ways, like using the eval or append commands? @richgalloway
Can someone please help me with this rule? I have been assigned to create a bunch of similar rules but I am struggling with a few. This is what I have so far...

========================================
Strategy Abstract
The strategy will function as follows:
Utilize tstats to summarize SMB traffic data.
Identify internal hosts scanning for open SMB ports outbound to external hosts.

Technical Context
This rule focuses on detecting abnormal outbound SMB traffic.
===============================================================================

The SPL is generating 0 errors but also 0 matches.

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_ip) as dest_ip dc(All_Traffic.dest_ip) as unique_dest_ips values(All_Traffic.dest_port) as dest_port values(All_Traffic.action) as action values(sourcetype) as sourcetype
    from datamodel=Network_Traffic.All_Traffic
    where (All_Traffic.src_ip [inputlookup internal_ranges.csv | table src_ip] OR NOT All_Traffic.dest_ip [ inputlookup internal_ranges.csv | table dest_ip]) AND All_Traffic.dest_port=445
    by _time All_Traffic.src_ip span=5m
| `drop_dm_object_name(All_Traffic)`
| where unique_dest_ips>=50
| search NOT [ | inputlookup scanners.csv | table ip | rename ip as src_ip]
| search NOT src_ip = "x.x.x.x"
| head 51
Hi @Narendar Reddy.Mudiyala, The idea can be found here: https://community.appdynamics.com/t5/Idea-Exchange/Individual-Pod-Restart-alerts/idi-p/51177 As of now, there has been no official update on the Idea Exchange post. 
Try the addcoltotals command. <<your current query>> | addcoltotals labelfield=location label="AM05"  
Many thanks indeed dtburrows3, this is EXACTLY what I was looking for!
I think an eval expression like this would do it.         | eval targeted_component=case( mvcount('triggeredComponents{}.triggeredFilters{}.trigger.value')==1, if(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "\w+\s*\/\s*\w+(?:\s+\w+)*"), 'triggeredComponents{}.triggeredFilters{}.trigger.value', null()), mvcount('triggeredComponents{}.triggeredFilters{}.trigger.value')>1, mvmap('triggeredComponents{}.triggeredFilters{}.trigger.value', if(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "\w+\s*\/\s*\w+(?:\s+\w+)*"), 'triggeredComponents{}.triggeredFilters{}.trigger.value', null())) )         and the output should look something like this. Using the mvmap function, we loop through each entry of the multivalue field and check if the entry matches a specified regex pattern. If there is a match then we take the value of that entry and insert it into a new field. This new field can potentially also be multivalued, depending on if there are multiple entries from the original field that match the criteria. and for the stats command part I guess you can just use the newly derived field as a stats by-field to get counts (or whatever kind of stats aggregation is needed) | stats count as count by targeted_component  
Hello, Today we made modifications to the Domain Admin groups, for which we had previously enabled Notables. The issue is that I haven't received any alerts related to it, and the events have not been collected in Splunk yet. Here is the services snapshot for the Universal Forwarder from that domain controller: Do we need to make any changes? Please let me know. Thanks