All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Community, I'm working on a search for a dashboard panel and I need some help. I'm looking to get the owner, search_name, status_label, and the last comment. The search I have so far is below:

`notable`
| where owner=="User1" OR owner=="User2" OR owner=="User3" OR owner=="User4" OR owner=="User5" OR owner=="User6"
| where status_label=="Ready for Review" OR status_label=="Closed: False Positive" OR status_label=="Pending" OR status_label=="Closed: Valid - Remediated"
| stats earliest(owner) AS Analyst, earliest(search_name) AS "Alert Name", latest(status_label) AS Status, latest(comment) AS Summary
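One likely issue: stats without a BY clause collapses everything into a single row. A minimal sketch of one way to get the latest status and comment per notable, assuming each notable carries an event_id field you can group on (the field name is an assumption; adjust to your environment):

`notable`
| search owner IN ("User1","User2","User3","User4","User5","User6")
| search status_label IN ("Ready for Review","Closed: False Positive","Pending","Closed: Valid - Remediated")
| stats latest(owner) AS Analyst, latest(search_name) AS "Alert Name", latest(status_label) AS Status, latest(comment) AS Summary BY event_id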
I am currently using a bar chart visualization but I need to sort the bars in descending order. I can't use a simple chart count by EVNTSEVCAT | sort -count because the SEVCAT field contains multiple values and we only need I, II, and III. Below is my query:

search *
| eval CATI = if(SEVCAT=="I", 1, 0)
| eval CATII = if(SEVCAT=="II", 1, 0)
| eval CATIII = if(SEVCAT=="III", 1, 0)
| chart sum(CATI) sum(CATII) sum(CATIII)
| transpose

[screenshot of the visualization not included]

I need the visualization to be sorted in descending order. Any suggestions help :-). Thank you, Marco
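A possibly simpler route, assuming you only ever want categories I, II, and III (a sketch, not tested against your data):

search * SEVCAT IN ("I","II","III")
| stats count BY SEVCAT
| sort - count

stats produces one row per category, so the bar chart inherits the sorted row order directly and the transpose is no longer needed.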
I have an installation issue. I am trying to upgrade from Splunk 8.0.5 to 8.2.4. Here are the errors I'm receiving:

splunk-8.2.4-87e2dda940d1-linux-2.6-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID b3cd4420: NOKEY
^Cerror: can't create transaction lock on /var/lib/rpm/.rpm.lock (Interrupted system call)

The installation just hangs at this point and I have to back out of it. I've stopped the Splunk processes (splunkd). Any idea what's going on?
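A hedged sketch of the usual first steps: NOKEY means rpm doesn't have Splunk's GPG public key imported, and the transaction-lock error suggests another rpm/yum process (or a stale lock) is holding /var/lib/rpm. The key URL below is the one Splunk has published in its documentation; verify the current location before using it:

# check for another rpm/yum process holding the database
ps aux | grep -E "rpm|yum"

# import Splunk's signing key so the signature check passes
rpm --import https://docs.splunk.com/images/6/6b/SplunkPGPKey.pub

# retry the upgrade
rpm -Uvh splunk-8.2.4-87e2dda940d1-linux-2.6-x86_64.rpm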
This is a log example:

2022-04-19 11:33:41 Local1.Info 10.0.6.1 Apr 19 12:34:20 FireboxM470_HA2 801002AA8CC3A FireboxM471_HA (2022-04-19T18:34:20) firewall: msg_id="3000-0151" Allow Firebox External-H udp 206.131.15.124 78.243.26.213 2267 53 geo_src="USA" geo_dst="USA" duration="36" sent_bytes="105" rcvd_bytes="121" (Any From Firebox-00)

I need to extract the src_ip (206.131.15.124) and the dst_ip (78.243.26.213). Splunk does not create a proper regex by itself, no matter how many examples I give. I am looking for a regex that matches the 2nd IP in the log, and another one for the 3rd. So far I have this: "(\d{1,3}\.){3}\d{1,3}", which matches all 3 IPs, but I don't know how to select one of them. And this: "(tcp|udp)\s((\d{1,3}\.){3}\d{1,3})", which returns the second IP together with the protocol; I don't know how to remove the protocol and the space. Does anyone know how to extract those fields as new fields?
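A sketch that captures both IPs in one pass, assuming the protocol token (tcp/udp) always immediately precedes the source and destination addresses, as in the sample:

| rex field=_raw "(?:tcp|udp)\s+(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})\s+(?<dst_ip>\d{1,3}(?:\.\d{1,3}){3})"

The (?:...) groups match without capturing, so only the named groups src_ip and dst_ip become fields; that is how you keep the protocol and spaces out of the extracted values.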
Below are my raw logs. I want to extract "analystVerdict" and its corresponding value from the raw logs. Can someone please help?

\"mitigationStartedAt\": \"2022-04-13T03:57:58.393000Z\", \"status\": \"success\"}], \"threatInfo\": {\"analystVerdict\": \"false_positive\", \"analystVerdictDescription\": \"False positive\", \"automaticallyResolved\": false, \"browserType\": null, \"certificateId\": \"\", \"classification\": \"Malware\",

I tried the search below, but I am failing to get the result:

index=test_summary
| rex field=_raw ":\\\"(?<analystVerdict>\w+)\\\""
| table search_name analystVerdict
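A sketch that anchors on the key name instead of trying to count escaped quotes (\W+ swallows the run of backslashes, quotes, colon, and space between the key and its value):

index=test_summary
| rex field=_raw "analystVerdict\W+(?<analystVerdict>\w+)"
| table search_name analystVerdict

The original pattern expects the quote immediately after the colon, but in the raw data each colon is followed by a space and a literal backslash, which is likely why nothing matched.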
Hi, I have an index with one field as a timestamp, "SESSION_TIME", and another field, "SEQUENCE". The "SEQUENCE" field is unique for each event, and I am tasked with replacing the seconds part of each timestamp with the respective "SEQUENCE" number. This is what I currently wrote, but I clearly wrote it incorrectly: eval xxx = strftime(SESSION_TIMESTAMP,"%S" = "SEQUENCE") Can you please help? Thanks, Patrick
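strftime only formats a time, so the substitution has to happen outside it. A sketch, assuming SESSION_TIME holds epoch time and SEQUENCE fits where seconds go (both assumptions; if SESSION_TIME is a string, convert it with strptime first):

| eval xxx = strftime(SESSION_TIME, "%Y-%m-%d %H:%M:") . SEQUENCE

The . operator concatenates the formatted date/hour/minute prefix with the sequence number in place of the seconds.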
Hello, I have a tricky question. I'm trying to count tickets by the providers we have. I am using the parent and subtasks to check which team we are sending a subtask to, plus the service to know the provider. I'm stuck on cases like this one, with 3 events:

- the parent task, with no to_team and no provider
- subtask 1, with one to_team
- subtask 2, with a provider (different from the team above)

Right now all three of them are counted, as Provider1 (subtask 1), Provider2 (subtask 2) and Other (parent task). However, what I need is to avoid counting the parent task if there's a subtask with the needed information. There are some parent tasks with no information that do have to land in the "Other" section, because they need to be counted, but only when there's no subtask attached. Is it possible? I have tried subsearches but I cannot achieve one that works. Thank you in advance.
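One way to do this without a subsearch is eventstats, which can annotate each parent with what its subtasks contain. A sketch with hypothetical field names (parent_id linking subtasks to their parent, task_type distinguishing parents from subtasks; substitute your own):

| eventstats sum(eval(if(isnotnull(provider) OR isnotnull(to_team), 1, 0))) AS informed_subtasks BY parent_id
| where NOT (task_type=="parent" AND informed_subtasks > 0)
| stats count BY provider

Parents whose subtasks carry a provider or team are filtered out; parents with no informative subtasks survive and fall into "Other".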
I have a multisite cluster with 3 sites: 6 indexers as peer nodes clustered across the 3 sites (2 indexers each), managed by a manager node. We also have 2 SH clusters across the 3 sites:

SHcluster1 - 9 SHs total (3 SHs in each site)
SHcluster2 - 6 SHs total (2 SHs in each site)

I want to understand what the configuration will look like on the deployer, on each SH, and on the manager node, since this will be a multisite cluster. As far as I know, for a multisite cluster the configuration for a single SH is:

Configure the search heads
-----------------------------------
sudo ./splunk edit cluster-config -mode searchhead -site site1 -manager_uri <URI>:<mngmtPort> -secret <secretkey>

So, likewise, what will the configuration be for SH clusters in a multisite setup?
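A hedged sketch of how this is typically laid out; the ports, labels, and hostnames below are placeholders. Each member joins the indexer cluster with the site it physically sits in, and its SHC membership is initialized separately:

# on each SHcluster1 member, using that member's own site
sudo ./splunk edit cluster-config -mode searchhead -site site1 -manager_uri https://<manager>:8089 -secret <secretkey>
sudo ./splunk init shcluster-config -mgmt_uri https://<thisSH>:8089 -replication_port 9200 -shcluster_label shcluster1 -secret <shcsecret>
sudo ./splunk restart

Each SH cluster keeps its own deployer (conf_deploy_fetch_url in server.conf), its own shcluster_label, and its own captain bootstrap; the -site argument only affects search affinity against the indexer cluster.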
Hello, Since some domain e-mail changes in the company, I ended up with different users on splunk.com (here in this community). Is there any way I can get rid of the old ones while keeping all the information in the latest, newest created profile? I.e. a mail1 user created in 2018 plus a mail1 user created last week, and I'm the same person for both profiles; the only change is the e-mail. Thanks
Hi, We have events with a time field: Time=1650461136000. Our props configuration parses the time into:

_time: 2022-04-20 16:25:36
_indextime: 04/20/2022 16:22:43

[props]
TIME_PREFIX = ,\Time\=
TIME_FORMAT = %s%3N

That means the data is ingested with a future time. With that being said, what are we missing? Why do we still receive the warning "WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (350) characters of event. Defaulting to timestamp of previous event"? Thank you!
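A sketch of a tightened stanza, on the assumption that the warning comes from events where the prefix regex fails to match (the backslash escapes before T and = are unnecessary, the leading comma requires the field to be preceded by one, and a large lookahead lets Splunk wander past the timestamp); the stanza name is a placeholder:

[your_sourcetype]
TIME_PREFIX = Time=
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13

13 characters covers an epoch-milliseconds value, since the lookahead is counted from the end of the TIME_PREFIX match. Events that genuinely lack the Time= field will still warn and fall back to the previous event's timestamp.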
Hello, The reason for my question is that I have installed the database agent, but on the Controller I can't see queries and activities. When I take a look at the logs, I have the following: [agent log screenshot not included] and on the Controller UI I get this message: [screenshot not included] Thanks for your help!
Hi Splunkers, I'm facing the following task: I have to build a correlation search that checks for users who reach a web page without using the proxy, or, in other words, direct traffic that does not pass through it. The rule itself is not a problem; I could perform some checks, for example whether the host is not a proxy. My question is: using the Web data model (one constraint is to use a DM where possible), how can I distinguish direct web traffic from proxy traffic? I mean, which field or fields am I supposed to check to tell direct traffic apart from proxy traffic using this DM? Is this possible with the Web DM?
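One possible angle, assuming your CIM version ships a Proxy child dataset (constrained by tag=proxy) under the Web data model — worth verifying in your data model definition before relying on it:

| tstats count FROM datamodel=Web WHERE NOT nodename=Web.Proxy BY Web.src, Web.dest

Events that made it into Web but not into Web.Proxy were tagged web without the proxy tag, which is one DM-based way to approximate direct traffic; the other common approach is checking Web.dest (or src) against your known proxy addresses.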
Hi all, new to Splunk. We are regularly burning down our heavy forwarders, and as such their IPs change regularly. I need a way to keep the UFs pointed at the HFs, but I've read that using an AWS ELB isn't recommended. To add to the challenge, we have to keep everything encrypted over TLS. What is the recommended way to handle IPs changing all the time when managing hundreds of UFs? How do people ensure that the UFs are always talking to the geographically nearest HFs? Many thanks, Oz
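One common pattern is to point outputs.conf at stable DNS names rather than IPs and update the records as HFs are rebuilt (Route 53 with health checks, for instance, can give per-region answers). A sketch of a UF outputs.conf under that assumption; the hostnames and CA path are placeholders, and the TLS settings should be checked against your Splunk version's outputs.conf spec:

[tcpout]
defaultGroup = nearest_hfs

[tcpout:nearest_hfs]
# regional DNS alias maintained as HFs are rebuilt
server = hf.eu-west.example.internal:9997
useSSL = true
sslVerifyServerCert = true
sslRootCAPath = /opt/splunkforwarder/etc/auth/cacert.pem

Listing multiple server entries gives client-side load balancing across HFs in the same group; "geographically nearest" then comes from handing each region's UFs a different alias, or from latency-based DNS.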
Hi, The license has expired for all the nodes in the application, but data is still reporting; I am able to see it and drill down. For how many days can I see this data reporting after expiration? Regards, Hemanth Kumar.
Hello, I'm dealing with an issue on one of my forwarders. I have 13 forwarders that are all showing in my search app in the proper format. For some reason, the forwarder that also happens to be our primary domain controller is only using a source and sourcetype of XML files. Is there a way to troubleshoot this?
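A quick way to see what that host is actually sending is to compare its sources and sourcetypes against a healthy forwarder; the host name below is a placeholder:

| tstats count WHERE index=* host=YOUR-DC BY index, sourcetype, source

If only the XML sourcetypes show up, the next place to look is the inputs that host actually has applied (splunk btool inputs list --debug on the forwarder itself), since a DC's deployment-server class membership often differs from the other forwarders'.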
How do I write a query to trigger an alert if a user account has logged in during off-business hours?
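A sketch against Windows security logs, assuming EventCode 4624 logons in a wineventlog index and business hours of 08:00-18:00 Monday to Friday (all of these are assumptions to adjust):

index=wineventlog EventCode=4624
| eval hour = tonumber(strftime(_time, "%H")), day = strftime(_time, "%a")
| where hour < 8 OR hour >= 18 OR day=="Sat" OR day=="Sun"
| stats count BY user

Saved as a scheduled alert (e.g. running every hour over the last hour), it fires whenever the result count is greater than zero.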
I want to get an API usage report per user and I am struggling with the Splunk query for this. Can someone please help with the query? I tried using rex but didn't get through. In my app logs, I have text like:

U87XXXX:ddddffggggggsss.REG.Currency [RestInterceptor]: RestRequest: URI : https://abc.net/api/curr ........ RequestBody: {"loginId": "U87XXXX"}

I want the output as:

UserID      URL                        COUNT
U87XXXX     https://abc.net/api/curr   5
U78XXXX     https://abc.net/api/xyz    11

Thanks in advance.
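A sketch under the assumption that the user ID is always the token before the first colon and the URI always follows "URI :" (the index name and exact delimiters are placeholders to adjust):

index=your_app_index "RestRequest"
| rex field=_raw "^(?<UserID>\w+):"
| rex field=_raw "URI\s*:\s*(?<URL>\S+)"
| stats count AS COUNT BY UserID, URL
| sort - COUNT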
Hi. I have a log with different messages. I want to understand which line appears the most times in the log. Please help me. Here is an example of 4 lines from the log:

'2022-04-14 05:11:53,833',SmartX.ControlUp.Client.CacheActivityListener,'[Connections#12]','DEBUG','[OnDBTransaction] IsEntityInBlackList: Entity= Processes blackList is empty.'
'2022-04-14 05:11:53,833',SmartX.ControlUp.Client.AlertsFactory,'[Observables#18]','INFO','GetInvokedTrigger ShouldBeInvoked ==> Session, for trigger id = 3cb3a80e-0d64-4585-a255-9c554d534deb, trigger name = AAS_Session State - Active to Idle - BLK'
'2022-04-14 05:11:53,848',SmartX.ControlUp.Client.AlertsFactory,'[Observables#18]','DEBUG','ExamineAdvancedTriggersInternal - ret is true, trigger was added, trigger id = 3cb3a80e-0d64-4585-a255-9c554d534deb, trigger name = AAS_Session State - Active to Idle - BLK'
'2022-04-14 05:11:53,833',SmartX.ControlUp.Client.CacheActivityListener,'[Connections#12]','DEBUG','[OnDBTransaction] IsEntityInBlackList: Entity= Processes blackList is empty.'

I want to receive statistics about each raw line: how many times it appears in the log. Of course, my log has many more than 4 different lines.
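Since the leading timestamp makes every line unique, one approach is to strip it before counting. A sketch assuming every line starts with a quoted timestamp followed by a comma, as in the samples:

| rex field=_raw "^'[^']+',(?<message>.+)$"
| stats count BY message
| sort - count

If near-identical lines (differing only in trigger IDs, say) should also group together, the cluster command is an alternative worth looking at.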
Hello, How do I integrate AppDynamics with Jaspersoft to create formatted reports (dashboards and tables)? I used the following article as a guideline: https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-retrieve-metrics-and-so-I-can-display-them-with/ta-p/39478 It worked when I created a "Data Adapter" as "JSON URL/File", and it worked in some table cases, but not for all dashboards in Jaspersoft Community Edition. Also, it did not work with a "Web Data Source" via Jaspersoft Pro Edition!
Hi, I need some help. We have been using Splunk for MongoDB alerts for a while; the new MongoDB version we are upgrading to changes the log format from text to JSON. I need to alter the alerts in Splunk so that they will continue to work with the new JSON log format. Here is an example of a search query in one of the alerts we have now:

index=googlecloud* source="projects/dir1/dir2/mongodblogs" data.logName="projects/dir3/logs/mongodb" data.textPayload="* REPL *" NOT "catchup takeover"
| rex field=data.textPayload "(?<sourceTimestamp>\d{4}-\d*-\d*T\d*:\d*:\d*.\d*)-\d*\s*(?<severity>\w*)\s*(?<component>\w*)\s*(?<context>\S*)\s*(?<message>.*)"
| search component="REPL" message!="*took *ms" message!="warning: log line attempted * over max size*" NOT (severity="I" AND message="applied op: CRUD*" AND message!="*took *ms")
| rename data.labels.compute.googleapis.com/resource_name as server
| regex server="^preprod0[12]-.+-mongodb-server8*\d$"
| sort sourceTimestamp data.insertId
| table sourceTimestamp server severity component context message

The content of the MongoDB log is under data.textPayload; currently it is parsed with a regex and split into 5 labeled groups, and then we search each group for the string or message that we want to be alerted on. The new JSON-format log looks like this:

{"t":{"$date":"2022-04-19T07:50:31.005-04:00"},"s":"I", "c":"REPL", "id":21340, "ctx":"RstlKillOpThread","msg":"State transition ops metrics","attr":{"metrics":{"lastStateTransition":"stepDown","userOpsKilled":0,"userOpsRunning":4}}}

I need to split it into 7 groups, using the comma as a delimiter, and then search each group using the same search criteria. I have been trying and testing for 2 days; I'm new to Splunk and not very good at regex. Any help would be appreciated. Thanks! Sally
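Since the new payload is valid JSON, spath can replace the regex entirely rather than splitting on commas. A sketch, assuming the JSON still arrives inside data.textPayload (field paths like t.$date come from the sample and may need quoting, as below):

index=googlecloud* source="projects/dir1/dir2/mongodblogs" data.logName="projects/dir3/logs/mongodb"
| spath input=data.textPayload
| rename "t.$date" AS sourceTimestamp, s AS severity, c AS component, ctx AS context, msg AS message
| search component="REPL" NOT "catchup takeover" message!="*took *ms"
| rename data.labels.compute.googleapis.com/resource_name as server
| sort sourceTimestamp data.insertId
| table sourceTimestamp server severity component context message

spath flattens nested keys with dots (attr.metrics.userOpsKilled and so on), so the remaining filters can stay as ordinary search clauses instead of regex groups.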