All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello all, I send logs from multiple endpoints to a standalone Splunk HTTP Event Collector. Most logs are sent successfully, but some of them (same index, same endpoint, ...) get a 403 response and are not sent. I suspect it may be related to threads or sockets. Any ideas are appreciated.
I want to calculate the percentage of status code 200 out of the total count of status codes over time. I have written the query below using appendcols. The query works, but it does not give the percentage per minute, i.e. broken down by _time. I want the percentage of status code 200 by _time as well. Can anybody help me with how to write this query?

index=* sourcetype=* host=*
| stats count(sc_status) as Totalcount
| appendcols
    [ search index=* sourcetype=* host=* sc_status=200
    | stats count(sc_status) as Count200 ]
| eval Percent200=Round((Count200/Totalcount)*100,2)
| fields _time Count200 Totalcount Percent200
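A sketch of one pattern that keeps the per-minute breakdown (reusing the same placeholder index/sourcetype/host filters as above, so adjust them to your environment) is to bin events by time and compute both counts in a single stats call, which avoids appendcols entirely:

```
index=* sourcetype=* host=*
| bin _time span=1m
| stats count(sc_status) as Totalcount count(eval(sc_status=200)) as Count200 by _time
| eval Percent200=round((Count200/Totalcount)*100,2)
```

Because both counts come out of one stats aggregation split by _time, the percentage is computed per minute without a subsearch.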
Hi all, I am trying to authenticate a user against the REST API, but when testing via curl it fails when using the LB URL (F5). The user has replicated across all SHC members and can log in via the UI.

# curl -k https://Splunk-LB-URL:8089/services/auth/login -d username=user -d password='password'
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN" code="incorrect_username_or_password">Login failed</msg>
  </messages>
</response>

But when I try the same against the SH member directly, it works.

# curl -k https://Splunk-SearchHead:8089/services/auth/login -d username=user -d password='password'
<response>
  <sessionKey>gULiq_E7abGyEchXyw7rxzwi83Fhdh8gIGjPGBouFUd37GuXF</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>

Initially I thought it could be something on the LB side, but the LB URL works just fine for the "admin" user.

# curl -k https://Splunk-LB-URL:8089/services/auth/login -d username=admin -d password='password'
<response>
  <sessionKey>gULiq_E7abGyEchXyw7rxzwi83Fhdh8gIGjPGBouFUd37GuXF</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>

Has anyone come across an issue like this? Why would admin work fine via the LB, while a new local user works only against the SH directly and not via the load balancer?
Hello guys, hope you are doing great! I want to configure a query. Some users are disabled in AD, and in Splunk ES, when I open the Identity Investigator, it also shows them as disabled (cn=*,ou=disabled,ou=united,ou=accounts,dc=global,dc=ual,dc=com). But under Users it still shows their role under Roles, when it should show no_access. Now I want to build a query and create an alert. Can you please help me with this? Ani
There's almost always more than one way to do something in SPL, but why take the hard road?

<<your current search>>
| appendpipe
    [ stats sum(cap) as cap, sum(login) as login
    | eval location="AM05" ]
You may want to walk back the SPL and see on which line results are getting dropped. Is it the initial tstats call? Just by looking at it, the where clause in the tstats looks a bit strange, but it's hard to say without seeing the contents of the internal_ranges.csv lookup. Does a query like this pull back any results?

| tstats summariesonly=true allow_old_summaries=true
    values(All_Traffic.dest_ip) as dest_ip,
    dc(All_Traffic.dest_ip) as unique_dest_ips,
    values(All_Traffic.dest_port) as dest_port,
    values(All_Traffic.action) as action,
    values(sourcetype) as sourcetype
    from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.src_ip IN ("10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12")
        AND NOT All_Traffic.dest_ip IN ("10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12")
        AND All_Traffic.dest_port=445
    by _time All_Traffic.src_ip span=5m
| rename All_Traffic.* as *

I just switched up the tstats where filter a bit so that src_ip is an internal IP and dest_ip is an external IP, which I think is what you described in the original post.
Are there any other ways, like using the eval or append commands? @richgalloway
Can someone please help me with this rule? I have been assigned to create a bunch of similar rules, but I am struggling with a few. This is what I have so far...

========================================
Strategy Abstract
The strategy will function as follows:
- Utilize tstats to summarize SMB traffic data.
- Identify internal hosts scanning for open SMB ports outbound to external hosts.

Technical Context
This rule focuses on detecting abnormal outbound SMB traffic.
===============================================================================

The SPL generates 0 errors but also 0 matches.

| tstats summariesonly=true allow_old_summaries=true
    values(All_Traffic.dest_ip) as dest_ip
    dc(All_Traffic.dest_ip) as unique_dest_ips
    values(All_Traffic.dest_port) as dest_port
    values(All_Traffic.action) as action
    values(sourcetype) as sourcetype
    from datamodel=Network_Traffic.All_Traffic
    where (All_Traffic.src_ip [inputlookup internal_ranges.csv | table src_ip]
        OR NOT All_Traffic.dest_ip [ inputlookup internal_ranges.csv | table dest_ip])
        AND All_Traffic.dest_port=445
    by _time All_Traffic.src_ip span=5m
| `drop_dm_object_name(All_Traffic)`
| where unique_dest_ips>=50
| search NOT [ | inputlookup scanners.csv | table ip | rename ip as src_ip]
| search NOT src_ip = "x.x.x.x"
| head 51
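For what it's worth, `All_Traffic.src_ip [subsearch]` with no operator between the field and the subsearch is not valid SPL, which would explain zero matches. One way the lookup-driven filter is often expressed (a sketch only, assuming internal_ranges.csv has a src_ip column holding the internal ranges) is to rename the lookup field inside the subsearch so it expands into an OR of All_Traffic.src_ip=... terms that tstats can use directly:

```
| tstats summariesonly=true allow_old_summaries=true
    dc(All_Traffic.dest_ip) as unique_dest_ips
    from datamodel=Network_Traffic.All_Traffic
    where [ | inputlookup internal_ranges.csv
        | rename src_ip as All_Traffic.src_ip
        | table All_Traffic.src_ip ]
    AND All_Traffic.dest_port=445
    by _time All_Traffic.src_ip span=5m
```

The "dest_ip is NOT internal" condition may be simpler to apply after the tstats, e.g. with a | search NOT dest_ip IN (...) once drop_dm_object_name has stripped the All_Traffic prefix.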
Hi @Narendar Reddy.Mudiyala, The idea can be found here: https://community.appdynamics.com/t5/Idea-Exchange/Individual-Pod-Restart-alerts/idi-p/51177 As of now, there has been no official update on the Idea Exchange post. 
Try the addcoltotals command. <<your current query>> | addcoltotals labelfield=location label="AM05"  
Many thanks indeed dtburrows3, this is EXACTLY what I was looking for!
I think an eval expression like this would do it.

| eval targeted_component=case(
    mvcount('triggeredComponents{}.triggeredFilters{}.trigger.value')==1,
        if(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "\w+\s*\/\s*\w+(?:\s+\w+)*"),
            'triggeredComponents{}.triggeredFilters{}.trigger.value',
            null()),
    mvcount('triggeredComponents{}.triggeredFilters{}.trigger.value')>1,
        mvmap('triggeredComponents{}.triggeredFilters{}.trigger.value',
            if(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "\w+\s*\/\s*\w+(?:\s+\w+)*"),
                'triggeredComponents{}.triggeredFilters{}.trigger.value',
                null()))
    )

Using the mvmap function, we loop through each entry of the multivalue field and check whether the entry matches a specified regex pattern. If there is a match, we take the value of that entry and insert it into a new field. This new field can itself be multivalued, depending on whether multiple entries from the original field match the criteria. For the stats command part, I guess you can just use the newly derived field as a stats by-field to get counts (or whatever kind of stats aggregation is needed):

| stats count as count by targeted_component
Hello, Today we made modifications to the Domain Admin groups, for which we had previously enabled notables. The issue is that I haven't received any alerts related to this, and the events have not been collected in Splunk yet. Here is the services snapshot for the Universal Forwarder from that domain controller: Do we need to make any changes? Please let me know. Thanks
Hi Community, I'm fairly inexperienced when it comes to anything other than quite basic searches, so my apologies in advance. I have a field that returns several values, and I only wish to return one of them in my searches. The field name is "triggeredComponents{}.triggeredFilters{}.trigger.value" and it returns several values of different types, for example:

1
5
out
text / text text text
hostname1
hostname2
445

I only wish to retrieve and view the "text / text text text" value, and then pop that into a | stats command. Can someone please offer some advice? Many thanks in advance!
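In case it helps, a shorter alternative sketch (assuming the slash-separated entries are the only values in the field containing a " / " pattern) uses mvfilter to keep just the matching entries of the multivalue field:

```
| eval targeted_component=mvfilter(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "\w+\s*/\s*\w+"))
| stats count by targeted_component
```

mvfilter evaluates the match() predicate against each value of the field and drops the ones that don't match, so the hostnames and plain numbers fall away before the stats aggregation.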
Yes, it would be specific to HEC clients that check for the endpoint's availability with TCP connections without sending data. This would not happen with a HF, because the HTTP traffic would come into the HF and then be sent via the S2S protocol to the indexers, which wouldn't do checks like that. Sorry for the answer from far in the future.
Hi @Gunnar, thank you for your hint. In the _configtracker index there isn't any information about the user who made a change, and in any case it isn't well documented: I would have to interpret the events by myself, and I'm looking for documentation. Thank you again. Ciao. Giuseppe
I am trying to run this query, but Splunk is complaining that the eval is malformed. https://docs.splunk.com/Documentation/SCS/current/SearchReference/EvalCommandExamples I am not sure from the docs how to fix this.
Using the Add-On Builder I built a custom Python app to collect some asset information over an API. I'll preface all of this by saying my custom Python code in VisCo works all the time, every time, no hiccups. Using a select statement in the API request, I can gather specific fields. The more fields I define, the more issues I run into in Splunk. Basically, it feels like the app is rate limited. I would expect it to run for just under an hour. It usually fails after 10 minutes and starts again at the 2-hour (7200-second) interval configured on the input page. If I define fewer fields in the select request, it runs a little longer but still ends up failing, and obviously I'm not getting the data I want. If I set the bare minimum of one field, it runs for the expected time, stops, and starts again at its configured interval. I'm hesitant to say which platform, but it is cloud based. I'm running my app from an on-prem heavy forwarder indexing to Splunk Cloud. The input interval config is 2 hours. The Python script iterates through requests due to paging limitations, with delays between requests based on some math I did with the total number of assets and pages; it's about 3 seconds between requests. But again, my code works flawlessly in VisCo, so I have no reason to believe the target API is rate limiting me at that request interval. I've opened a ticket with Splunk, but I wanted to see if anyone else has experience with the Splunk Add-on Builder and custom Python modules.
Hi, maybe the _configtracker index can help. It would have old and new values for all configuration changes including changes made to user roles. BR! Gunnar