All Topics

I have an upgraded instance of McAfee ePO 5.10 that uses the McAfee Add-on 2.2 with Splunk DB Connect 3.2. After the upgrade, the standard query that was previously used no longer sends its output as it did before. Can you tell me whether this can be resolved by upgrading the DB connector to the latest version? And does anyone know if there is an update available for the McAfee Add-on for Splunk?
{"@timestamp":"2020-04-01T16:51:01.921Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.2", (actual event deleted)"}
{"@timestamp":"2020-04-01T16:51:01.921Z","@metadata": (actual event deleted) "}}
{"@timestamp":"2020-04-01T16:51:01.921Z","@metadata" (actual event deleted)}}

I tried multiple props:

SHOULD_LINEMERGE=false
TIME_FORMAT=%b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD=15
BREAK_ONLY_BEFORE_DATE=false
#SHOULD_LINEMERGE=true
#BREAK_ONLY_BEFORE = \w+\s\d+\s\d{2}:\d{2}:\d{2}\s[^\d]
#BREAK_ONLY_BEFORE_DATE=true
#TIME_FORMAT = %b %d %H:%M:%S
#TRUNCATE = 50000
#MAX_EVENTS = 200
#SHOULD_LINEMERGE = false
#LINE_BREAKER = ([\r\n]+)(?=\s*\{\s*\"timestam_ns\")
#TIME_FORMAT = %s%9N
#TIME_PREFIX = ^\s*\{\s*\"timestam_ns\"
#MAX_TIMESTAMP_LOOKAHEAD = 20
#SHOULD_LINEMERGE=false
#INDEXED_EXTRACTIONS=json
###DATETIME_CONFIG = current
###MUST_BREAK_AFTER = ^\w+\s+\w+
##LINE_BREAKER=((?
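A minimal props.conf sketch for breaking these events, assuming each event is a single-line JSON object that begins with the "@timestamp" key (the stanza name here is hypothetical; adjust it to your actual sourcetype):

```
[filebeat:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{"@timestamp")
TIME_PREFIX = \{"@timestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```

With single-line JSON, either KV_MODE = json (search time) or INDEXED_EXTRACTIONS = json (on the forwarder) would handle field extraction, but not both at once.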
Or do I have to run a whole new query?
Is there a difference between stopping Splunk with splunk stop and sending a TERM signal? I'm using Kubernetes, which sends a TERM signal when it needs to stop a container, so I need to understand the difference between these two options.
Hi - We want to count the users connected in each hour. When a user connects we get event_id="globalprotectgateway-auth-succ" in the logs, and when they disconnect the event_id is event_id="globalprotectgateway-logout-succ". Each login/logout pair should be counted as one event per hour, but sometimes a user simply remains logged in and there is no logout; in that case we still want the user counted as a logged-in user, since they haven't logged out.

(index=firewall OR index=cloud_firewall) eventtype="pan_system" log_subtype="globalprotect" sourcetype="pan:system" (event_id="globalprotectgateway-auth-succ" OR event_id="globalprotectgateway-logout-succ" OR "globalprotectportal-gencookie-succ")
| eval logins = strftime(_time,"%H")
| transaction user startswith="globalprotectgateway-auth-succ" endswith="globalprotectgateway-logout-succ"
| stats values(user), distinct_count(user) by logins

This works fine when there is a login and a logout within every 60 minutes, but if a user session has not logged out, that user doesn't get counted in the hourly user count.
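One possible sketch, not a definitive fix: transaction silently discards open sessions when they are evicted, so keepevicted=true keeps transactions that never saw a logout event, which lets still-connected users be counted:

```spl
(index=firewall OR index=cloud_firewall) eventtype="pan_system" log_subtype="globalprotect" sourcetype="pan:system" (event_id="globalprotectgateway-auth-succ" OR event_id="globalprotectgateway-logout-succ")
| transaction user startswith="globalprotectgateway-auth-succ" endswith="globalprotectgateway-logout-succ" keepevicted=true
| eval logins = strftime(_time,"%H")
| stats values(user) as users, dc(user) as user_count by logins
```

The trade-off is that evicted transactions can also include genuinely stale sessions, so the maxspan/maxpause options on transaction may need tuning.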
While setting up Splunk Add-on Builder 3.0.1 with Splunk 8.0.2.1 on Mac, I am getting an "ImportError: Unable to load system certificate authority files" error when I try to test a REST API URL GET method. However, the same works fine on Windows. Has anyone run into a similar issue and found a fix/workaround?
Hi everyone, I am new to Splunk and still learning. Can someone please help me with the query below?

My log file:

2020-03-30 12:21:45,075 INFO com.www.yyy.MyClass[ - ] - screen changing to [Select]
2020-03-30 12:25:31,574 DEBUG com.www.yyy.Manager[ - ] - Service- checking
2020-03-30 12:25:31,574 DEBUG com.www.yyy.Manager[ - ] - Service- found
2020-03-30 12:25:31,663 DEBUG com.www.yyy.Manager[ - ] - All Service took 89 milliseconds

My requirement: I want to get the screen name and how long the services took. From the example above I need something like: "Select" screen services took 89 milliseconds. Please help me with the query. I would really appreciate it! Thank you!
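A rough sketch, under the assumption that each "All Service took" line belongs to the most recent "screen changing to" line before it (filldown carries the last seen screen name forward; the index and sourcetype names are hypothetical):

```spl
index=your_index sourcetype=your_sourcetype ("screen changing to" OR "All Service took")
| rex "screen changing to \[(?<screen>[^\]]+)\]"
| rex "All Service took (?<duration_ms>\d+) milliseconds"
| sort 0 _time
| filldown screen
| where isnotnull(duration_ms)
| table _time screen duration_ms
```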
Hello, I have the following query in an alert to check the status of 6 hosts:

index=idx_nmon_data sourcetype=Perfmon:Memory eventtype=perfmon_memory
| eval threshold=95
| where mem_used > threshold
| table _time host mem_used threshold

I would like the alert to trigger when a specific server is above 95% mem_used two times in a row, and for the email to show these fields: _time host mem_used threshold. I thought of two options, but neither matches exactly what I want:

- Add "stats dc(_time) as times by host" to the search and configure the alert to trigger when results are >1. But in this case I lose the mem_used and _time information, and I would like to see them in the table in the email.
- Inside the alert, as a custom condition, write "search dc(_time) by host > 1", but that does not work.

Does anyone have other ideas, or am I doing something wrong? I would also like to keep this as a single query, to avoid consuming resources on my search head server. Thanks in advance, Jaime
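A sketch of one approach, assuming events are in time order per host: streamstats over a sliding window of two events counts consecutive breaches while keeping _time and mem_used available for the email table. The alert condition would then simply be "number of results > 0":

```spl
index=idx_nmon_data sourcetype=Perfmon:Memory eventtype=perfmon_memory
| eval threshold=95
| sort 0 host _time
| streamstats window=2 count(eval(mem_used > threshold)) as breaches by host
| where breaches = 2
| table _time host mem_used threshold
```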
Hello! I'm trying to get statistics for groups of 200 events. For instance, I have the following stats:

|stats sum(CPU) avg(resptime) c as "total"

sum(CPU)----------avg(resptime)----------total
1000-----------------0.00240------------------800

What I want to have is:

sum(CPU)----------avg(resptime)----------total
120-------------------0.00125------------------200
300-------------------0.00124------------------200
480-------------------0.00122------------------200
100-------------------0.00122------------------200

Note: I know how to create bins with a time span, but what I need is to make buckets based on event quantity, NOT time. Thank you in advance!
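A sketch of count-based bucketing: streamstats numbers the events, then an eval derives a group id for every run of 200 (field names taken from your example; the base search is a placeholder):

```spl
index=your_index
| streamstats count as event_number
| eval group = ceil(event_number / 200)
| stats sum(CPU) as sum_cpu, avg(resptime) as avg_resptime, count as total by group
```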
I'm standing up a 7.3.3 index cluster and I have a strange mystery. I've got the cluster master and search heads happily forwarding away to the index cluster, and the "list forward-server" output shows exactly that. I'm starting to set up endpoints, and I'm using the EXACT same outputs.conf and certs as on the master and search heads. Data forwards happily and shows up in searches, but "list forward-server" shows:

Active forwards: None
Configured but inactive forwards: None

Netstat on the forwarder shows that the request goes out to the master over :8089 as configured, but it is never answered, so it just sits at TIME_WAIT forever until the connection is killed:

tcp 0 0 10.5.3.121:36314 10.10.84.16:8089 TIME_WAIT -

Netstat on the master shows the connection request, but it also just says TIME_WAIT until it is killed. Yet clearly the forwarder is picking up the indexer discovery data somewhere, because it is forwarding to all 6 members of my cluster in rotation. I know it's not keeping a previous list like it would if the master went down, because it is a fresh install. The only entry in the forwarder's logs, apart from connections to the indexers and notes about log files, is:

04-01-2020 11:04:22.632 -0400 INFO TcpOutputProc - Initialization time for indexer discovery service for default group=splunkssl has been completed.

The master doesn't mention this forwarder at all in splunkd.log. I know the forward-server list is a bit unnecessary, since the data is being ingested as it should be, but something is not right. The behaviour is the same whether the forwarder is running 8.0.2.1 or 7.0.13.1.
Hi All, I hope everyone is staying healthy out there! I was wondering if anyone had insight or use-case examples for leveraging Splunk to generate revenue internally at your companies. We all know Splunk (Splunk Core specifically) and its value in regards to ROI (reduced downtime, ease of access to data across all data types, reduced labor costs, etc.). However, I'm looking for examples of using Splunk to specifically generate revenue (with the exception of the obvious: alerting, reporting, and dashboards on source data from network, application, DB, etc.) and to implement a charge-back model. I've already tested the charge-back application available on Splunkbase, but I'm looking for possible new use cases to expand on our BAU use cases internally.
Hello, I am attempting to create a workflow action that allows a risk modifier to be adjusted. I have the command needed to adjust the risk modifier; however, I need to tokenize the risk score that I need to adjust it by, and this risk_score has to be generated earlier in the same SPL where the makeresults occurs. What I have already is a search that captures the risk score that needs to be tokenized and fed into the makeresults command to adjust the overall score. This is what I need help with. I have the following search:

| from datamodel:"Risk"."All_Risk"
| rex field=_raw "incident_id=\"(?<incident_id>[^\"]*)"
| fillnull incident_id value=null
| search incident_id!=null
| search risk_object="testuser"
| table risk_object risk_score
| sort - _time

The output gives me risk_object=testuser risk_score=60. I then need to pipe out a makeresults command that applies the risk_score value as a token inside that makeresults command. The risk_score value needs to be generated, and the token applied, inside the same SPL as the workflow action that runs the makeresults. Thank you
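A sketch of one way to feed a field value from a search into a makeresults in the same SPL: the map command substitutes $field$ tokens from each input row into its inner search. The collect target index and the idea of writing the adjustment back are assumptions here, not your confirmed workflow:

```spl
| from datamodel:"Risk"."All_Risk"
| search risk_object="testuser"
| table risk_object risk_score
| map search="| makeresults
    | eval risk_object=\"$risk_object$\", risk_score=$risk_score$
    | collect index=risk"
```

Note that map re-runs the inner search once per input row, so it is best kept to searches that return a handful of rows.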
I'm new to Splunk. In my log I have JSON with two fields of interest: "initialCreationDate":"2020-03-02T00:00:00" and "finalCreationDate":"2020-04-01T11:53:29". My goal is to get a count of the results that fall within the range between these fields. So far I've tried to extract only the first field and count using ">" against an example string, but it's not working:

index=foo | rex field=raw "REQ=(?<REQ>[^}]+})" | spath input=REQ | eval n=strptime(REQ.initialCreationDate,"%Y-%m-%dT%H:%M:%S") | stats count by n > strptime("2020-03-26T00:00:00")

Log sample:

[class] 2020-04-01 11:53:29,847 INFO [http-nio-80-exec-19] M=method, UA=ua, URI=/someUri, QS=limit=21&offset=0&sort=-createDate, RT=128, ET=100, ELAPSE-TIME=129, REQ={"userId":xxx,"initialCreationDate":"2020-03-02T00:00:00","finalCreationDate":"2020-04-01T11:53:29","source":"src","s":[0],"accounting":"C","consider":true}
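A possible sketch: strptime needs a format string as its second argument, and the comparison belongs in a where clause rather than in stats ... by. This assumes spath extracts initialCreationDate as a top-level field from REQ (also note the field is _raw, with the underscore):

```spl
index=foo
| rex field=_raw "REQ=(?<REQ>[^}]+})"
| spath input=REQ
| eval start=strptime(initialCreationDate, "%Y-%m-%dT%H:%M:%S")
| where start > strptime("2020-03-26T00:00:00", "%Y-%m-%dT%H:%M:%S")
| stats count
```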
Hi Team, I have onboarded the Linux CPU logs using the Splunk Add-on for Linux. The requirement is that we need to send an alert when CPU utilization exceeds 80% three times in a row, using the streamstats command. The input is enabled every 1200 seconds and the alert will run every 30 minutes. Could you please help me with the query?
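A rough sketch, assuming the add-on's cpu sourcetype with an all-CPU row and a pctIdle field (adjust the sourcetype and field names to what your add-on actually extracts):

```spl
sourcetype=cpu CPU=all
| eval cpu_used = 100 - pctIdle
| sort 0 host _time
| streamstats window=3 count(eval(cpu_used > 80)) as high_count by host
| where high_count = 3
| table _time host cpu_used
```

With readings every 1200 seconds, a 30-minute alert window may only see two readings, so the alert's time range would need to cover at least three collection intervals.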
Hello All, I am having issues incorporating the condition below with the Splunk API:

items.data.fed_id != \"\" OR items.institution_id != \"\"

I get no results and no errors via the Splunk API, but I do get results through the Splunk UI.
I have data coming into Splunk from a SQL table, and one of the columns in the table contains XML. Is there a way to parse that XML and extract fields in Splunk? The XML is not always the same and keeps changing.
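A sketch using spath, which can parse XML as well as JSON from a named field at search time, so a changing structure is tolerated (the column name here is hypothetical):

```spl
index=your_db_index
| spath input=XmlColumn
```

As an alternative, xmlkv extracts key/value pairs from XML sitting in _raw.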
Hi, How do I write a regex to capture whenever I see any combination of 10 digits followed by .zip within a _raw event? eg: url=www.abcdef.com/1234532419.zip Thanks.
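A sketch with rex: \d{10} matches ten consecutive digits and the dot before "zip" is escaped so it matches literally (the capture-group name is arbitrary):

```spl
index=your_index
| rex field=_raw "(?<zip_file>\d{10}\.zip)"
| search zip_file=*
| table url zip_file
```

For url=www.abcdef.com/1234532419.zip this would capture zip_file=1234532419.zip.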
Hello, I used this command to convert from bytes to GB:

| eval b = b/1024/1024/1024

and this is an example resulting value: index1: 0.00000872090458869934. But the value is too long, so I tried to round using this instead:

| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ]

but then I get this result: index1: 0.000, whereas I expect to get index1: 0.001. Can you suggest how to do that correctly so I get the expected result?
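One note and a sketch: round(0.00000872, 3) is mathematically 0.000, so the rounding itself is working. If the intent is to show at least 0.001 GB for any index that has nonzero bytes, the value can be clamped after rounding (assuming that floor behaviour is what is wanted):

```spl
| foreach * [ eval <<FIELD>> = if('<<FIELD>>' > 0, max(round('<<FIELD>>'/1024/1024/1024, 3), 0.001), 0) ]
```

This leaves true zeros at 0 and floors every other rounded value at 0.001.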
I have the below log:

Service ABCD(blabla_blabla): 365.45.1.87.3.60354 -> remote.234.5 Failure
Service DERF(blabla_blabla): remote.567.9 -> remote.284.9 Failure

and would like to capture with a regex:

a: 365.45.1.87.3.60354
b: remote.234.5

a: remote.567.9
b: remote.284.9

Thanks for the help.
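A sketch that matches the two sample lines shown, capturing the values on either side of the arrow into fields a and b as requested:

```spl
| rex field=_raw "Service \w+\([^)]*\):\s+(?<a>\S+)\s+->\s+(?<b>\S+)\s+Failure"
| table a b
```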
| tstats count where index=proxy AND sourcetype=dns earliest=-7d by _time, ComputerName span=1h
| xyseries _time, ComputerName, count

So this is an actual field with an actual value, and it isn't loading into the search. Any reason why?
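Two things worth checking, sketched below as an assumption rather than a confirmed diagnosis: tstats operates only on indexed fields (or data model fields), so ComputerName returns nothing unless it is extracted at index time; and the span option belongs immediately after _time inside the by clause rather than trailing the whole clause:

```spl
| tstats count where index=proxy AND sourcetype=dns earliest=-7d by _time span=1h, ComputerName
| xyseries _time ComputerName count
```

If ComputerName is only a search-time extraction, a plain search with timechart (or an accelerated data model plus tstats) would be needed instead.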