All Topics



Event status history: False positive (25 May), False positive (24 May), Investigating (23 May), Investigating (22 May), Service degradation (21 May). When the status changed from Service degradation to Investigating, an alert should be raised; when it changed from Investigating to Investigating, no alert should be raised; and when it changed from Investigating to False positive, an alert should be raised again.

Dashboard query:
index="mail_activity" sourcetype="service:message" DisplayName="Exchange Online"
| eval myTimeNewEpoch=strptime(UpdatedTime,"%Y-%m-%dT%H:%M:%S")
| eval UpdatedTime=strftime(myTimeNewEpoch,"%Y-%m-%d %H:%M:%S")
| table UpdatedTime DisplayName Status Description
| rename UpdatedTime as Time DisplayName as Application
| sort -Time

Please help me with the query to create the alert. Thanks in advance.
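A sketch of one possible alert search, reusing the index, sourcetype, and the Status/DisplayName fields from the question above; streamstats carries the previous status forward so only the transitions that should alert survive. The field names and transition pairs come from the description; everything else is an assumption:

```spl
index="mail_activity" sourcetype="service:message" DisplayName="Exchange Online"
| eval _time=strptime(UpdatedTime,"%Y-%m-%dT%H:%M:%S")
| sort 0 _time
| streamstats current=f window=1 last(Status) as prev_status by DisplayName
| where (prev_status="Service degradation" AND Status="Investigating")
     OR (prev_status="Investigating" AND Status="False positive")
```

Scheduled with the trigger condition "Number of Results > 0", each surviving row represents one transition that should raise an alert.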
Hi, I'm using the rest command to get a list of all knowledge objects: | rest /servicesNS/-/-/directory Is there an endpoint to add the info whether it is stored in local/default/both?
Hi, to install the UF in Docker, I followed the steps below:
1) docker pull splunk/universalforwarder:latest
2) docker run -d -p 9997:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=" --name uf splunk/universalforwarder:latest
3) When I ran docker ps, the container status was healthy.
4) But I am not able to start/stop Splunk: in /opt/splunk/bin I am not finding any script for splunk start and stop.
itsi_tracked_alerts shows the correct time of events, but itsi_grouped_alerts shows the same event 15-20 minutes later, which results in a late view of alerts in Episode Review.

Search: index=itsi_grouped_alerts sourcetype="itsi_notable:group" Garbage Collection "f7a3cdb2c5a1bf1108305ea0"

Event at 5/28/20 9:16:38.000 AM:
{ ArchiveMon: NO, ConfigurationItem: GOE Hybris Admin Europe 2, CustomUrl: http://monspkprdci05:8000/en-US/app/itsi/dynatrace_dashboard?form.kpi=*Garbage Collection*&form.service=hybadm&form.region=eu2, IsStartForAutomation: false, SupportGroupName: GOE_AO_TA_Accenture, aggregated: true, alert_value: 2, automation: FALSE, count: 2 }

Event at 5/28/20 9:04:17.769 AM:
{ ArchiveMon: NO, ConfigurationItem: GOE Hybris Admin Europe 2, CustomUrl: http://monspkprdci05:8000/en-US/app/itsi/dynatrace_dashboard?form.kpi=*Garbage Collection*&form.service=hybadm&form.region=eu2, IsStartForAutomation: false, SupportGroupName: GOE_AO_TA_Accenture, aggregated: true, alert_value: 1, automation: FALSE, count: 2 }
What integrations are available (TA, REST, syslog, etc.) to monitor NetBackup from Splunk?
Hi, I would like to have a trellis donut chart in my dashboard. Is that possible? If not, is there a workaround?
We are trying to zip and expand several levels of nested JSON data. Here are two example events of our JSON data:

{
  "level0": {
    "globalname": "TOP_A",
    "globalver": "1",
    "level1": {
      "level2": [
        {
          "lvl2name": "LVL2A",
          "warnings": {
            "totalcount": "26",
            "rulebreakdown": [
              { "rulecount": "2", "rulename": "ruleA" },
              { "rulecount": "24", "rulename": "ruleB" }
            ]
          }
        },
        {
          "lvl2name": "LVL2B",
          "warnings": {
            "totalcount": 81,
            "rulebreakdown": [
              { "rulecount": "11", "rulename": "ruleG" },
              { "rulecount": "67", "rulename": "ruleR" },
              { "rulecount": "3", "rulename": "ruleZ" }
            ]
          }
        }
      ]
    }
  }
}

{
  "level0": {
    "globalname": "TOP_D",
    "globalver": "1.5",
    "level1": {
      "level2": [
        {
          "lvl2name": "LVL6A",
          "warnings": {
            "totalcount": "2",
            "rulebreakdown": [
              { "rulecount": "2", "rulename": "ruleAB" }
            ]
          }
        },
        {
          "lvl2name": "LVL6D",
          "warnings": {
            "totalcount": "23",
            "rulebreakdown": [
              { "rulecount": "5", "rulename": "ruleGG" },
              { "rulecount": "14", "rulename": "ruleRG" },
              { "rulecount": "4", "rulename": "ruleGZ" }
            ]
          }
        }
      ]
    }
  }
}

This would be the desired output of these two events:
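A sketch of one way to flatten this with spath and mvexpand, using the field paths from the sample events above; the final table layout is a guess at the desired output:

```spl
| spath path=level0.globalname output=globalname
| spath path=level0.globalver output=globalver
| spath path=level0.level1.level2{} output=level2
| mvexpand level2
| spath input=level2 path=lvl2name output=lvl2name
| spath input=level2 path=warnings.totalcount output=totalcount
| spath input=level2 path=warnings.rulebreakdown{} output=rule
| mvexpand rule
| spath input=rule path=rulecount output=rulecount
| spath input=rule path=rulename output=rulename
| table globalname globalver lvl2name totalcount rulename rulecount
```

Each mvexpand step fans one event out into one row per array element, so the two sample events end up as one row per rule.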
Hi! I did a search like this:
| tstats summariesonly=t count from datamodel=XZY WHERE field_ip="192.168.101" OR field_ip="192.168.102" OR field_ip="192.168.103" OR field_ip="192.168.104" OR field_ip="192.168.105" by field_ip, _time
But this shows me just one line and concatenates the single field values (the different IPs) one after another, so the first quarter of the line is the first IP, the next quarter is the next IP, and so on. When I do the same search with the following:
| datamodel XZY search | search field_ip="192.168.101" OR field_ip="192.168.102" OR field_ip="192.168.103" OR field_ip="192.168.104" OR field_ip="192.168.105" | timechart count by field_ip
it does split field_ip into its values and shows me a separate line for each IP. Due to performance issues, I would like to use the tstats command. (I have the same issue when using the stats command instead of the timechart command.) So I guess there is something like a parameter I must give the stats command to split the result into different lines instead of concatenating the results.
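One common way to keep the split-by behaviour with tstats is prestats mode feeding timechart; a sketch reusing the data model and field names from the question (the span value is an assumption to adjust to your time range):

```spl
| tstats prestats=t summariesonly=t count from datamodel=XZY
    where field_ip="192.168.101" OR field_ip="192.168.102" OR field_ip="192.168.103"
       OR field_ip="192.168.104" OR field_ip="192.168.105"
    by _time span=1h, field_ip
| timechart span=1h count by field_ip
```

With prestats=t, tstats emits intermediate results that timechart can aggregate, so each field_ip value becomes its own series instead of being concatenated into one.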
What integrations are available for Veeam backup monitoring? I see the Veeam Backup Monitor app on Splunkbase has been set to Archive, so any inputs would be much appreciated.
Hi, I have two search queries that differ only in their time range. I want to show the results of both queries in a single dashboard. Is there a way to do it?
I am using Simple XML and put 4 charts inside one panel. Since I have other panels in the same row, I am struggling with the alignment. I tried adjusting the width in the CSS below, but it just reduced the size of the charts and didn't align them vertically.

#reqPerChanChart { width:25% !important; }
#reqPerClientChart { width:25% !important; }
#resPerChanChart { width:25% !important; }
#resPerClientChart { width:25% !important; }
Hi! I'm trying to see if I can take a JSON payload like this:
{"log":"2020-05-28 06:52:34,671 GMT TRACE [com.xxx.oss.core.servlets.TransactionFilter] (http-nio-8080-exec-7|R:lB6-JwrDGgR-ZvKy|ThreadId=55|ThreadPriority=5) Responding with outbound response: HTTP 200\\ncontent-length: 85\\ncontent-type: application/json\\n\\n{\"results\":true,\"internalTransactionId\":\"lB6-JwrDGgR-ZvKy\",\"executionTimeInMillis\":0}\n","container_name":"idm-geoservices","namespace_name":"dev","host":"server"}
and replace each literal "\n" with a real new line, and also present the JSON payload in pretty JSON format, if possible. I have tried something like:
| eval log=replace(log,"\\\\n","[\n]")
I can get the replacement to show when I substitute something like "TEST", but what I really want is for the "log" field to render each \n in the message as a new line. Longer term we're going to implement Splunk Connect for Kubernetes, but we're trying to take care of our user now by parsing out a multi-line JSON message from Kubernetes. Thank you! Stephen
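One workaround is to turn log into a multivalue field, since Splunk renders each value of a multivalue field on its own line; a sketch (depending on how many layers of escaping your raw event carries, the delimiter may need to be "\\n" or "\\\\n"):

```spl
| eval log=split(log, "\\n")
```

split() takes a literal delimiter rather than a regex, and in eval string literals a backslash must itself be escaped, which is why the two-character sequence backslash-n is written as "\\n" here.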
Hello! I have multiple questions around the topic of alerts in Splunk. Here is what I am trying to achieve: I am trying to automate a couple of macros to run one after the other. For example:
1) My first macro extracts data for a period of 6 months from another index (let's call this Complete_Data_index) into my new index (let's call it Data_Tier1).
2) My second macro runs on Data_Tier1, generating additional fields along with the original fields as part of the results, and collects them into a new index called Data_Tier2.
3) My third macro runs on the index Data_Tier2, where again it generates additional fields along with the original fields and the fields generated for Data_Tier2, and collects them into a new index called Data_Tier3.
The requirement now is to generate logs that record whether each macro run was successful, erroneous, partially successful, etc. Basically, to set up a logger that shows what is happening at each stage of the macros. My questions:
1) One question I had was about the "Trigger Conditions" feature. If for some reason data was not collected into Data_Tier1 from Complete_Data_index, and my trigger condition is set to "Number of Results greater than 0" (refer to the screenshot), will this trigger an alert indicating that no data was collected?
2) Can all this be achieved just with Splunk, or should I use Python to help me set up logging/loggers?
Please help and suggest! Thanks in advance!
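Regarding question 1, a trigger condition of "Number of Results > 0" only fires when the search returns rows, so by itself it cannot alert on data that never arrived. A sketch of an inverse "no data" alert, using the index name from the description (the 24-hour window is an assumption):

```spl
| tstats count where index=Data_Tier1 earliest=-24h latest=now
| where count=0
```

Scheduled with "Number of Results > 0" as the trigger, this fires precisely when nothing was indexed into Data_Tier1 during the window, which is the missing-data case described above.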
Hi All, We have a requirement to add some custom filter in the notable events filter menu such as country. Can you please advise if this is possible and how ?
Does the RBAC get-all-users REST API support pagination? If yes, is there any doc I can refer to, and what is the maximum number of users returned in one API call? Thanks
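In my experience, splunkd REST endpoints generally accept `count` and `offset` GET parameters for paging, with count=0 meaning "return everything". A sketch using the | rest search command against the standard users endpoint (whether your specific RBAC endpoint honors both parameters is worth verifying in the REST API Reference):

```spl
| rest /services/authentication/users count=100 offset=0 splunk_server=local
| table title roles
```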
I have a table:

Month  Transactions
Mar    2000
April  3000

I want to display the difference between April and March and also the % reduction in Splunk. Is there a way to do that?
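A sketch using the delta command, assuming the table rows are already sorted chronologically (delta subtracts the previous row's value from the current one):

```spl
| table Month Transactions
| delta Transactions as diff
| eval pct_change=round(100 * diff / (Transactions - diff), 2)
```

With the sample rows above (Mar 2000, April 3000), the April row gets diff=1000 and pct_change=50; a negative pct_change would indicate a reduction.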
I'm running a report which triggers an email with a CSV attachment. I want to store all those CSV files in a specific folder on the server directly, without manual intervention. I found some options in Splunk like outputcsv or outputlookup, but these store results in default directories. I'm struggling to find the best solution for this; can anyone help?
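As far as I know, outputcsv always writes under $SPLUNK_HOME/var/run/splunk/csv on the search head, and that location is not configurable from SPL. A common workaround is a scheduled report that writes the file there, plus an OS-level job that moves it to the target folder. A sketch (the search and file name are placeholders):

```spl
index=my_index sourcetype=my_sourcetype
| table field1 field2
| outputcsv my_report.csv
```

A cron entry such as `mv $SPLUNK_HOME/var/run/splunk/csv/my_report.csv /data/reports/` (path is an example) would then relocate each run's output without manual intervention.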
I have the below Splunk events / search results:
message: host id :undefined, test Id :"42342424-8bf9-4abdc", msg : processing test data
message: host id :undefined, test Id :"4eee2ab1-8bf9-4abdc", msg : data processing for test
message: host id :undefined, test Id :"5eee2ab1-8bf9-43434", msg : data processing for test
message: host id :undefined, test Id :"4234244-3339-4abdc", msg : processing test data
message: host id :undefined, test Id :"4ujuj-8bf9-qwqweees", msg : data processing for test1
message: host id :undefined, test Id :"4tft-8bf9-hjhheeessss", msg : data processing for test1
extras-path: /v1/test-data/test-update
I want to show the data in a pie chart with three slices based on the msg part: one count for "data processing for test", one count for "data processing for test1", and one count for the extras-path events. I am not sure how to evaluate the msg key and display the three results in one pie chart. Can anyone please help?
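A sketch that buckets each event with eval case() and match(); the patterns come from the sample events above, and the test1 pattern is checked first because "data processing for test" would also match the test1 events:

```spl
| eval slice=case(
    match(_raw, "data processing for test1"), "test1",
    match(_raw, "data processing for test") OR match(_raw, "processing test data"), "test",
    match(_raw, "extras-path"), "test-update path",
    true(), "other")
| stats count by slice
```

Rendered with the pie chart visualization, the stats output gives one slice per distinct slice value.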
Hello guys, sorry for blasting... When I input data into Splunk, I find some field values in the events are "None", "NaN", or "". How can I delete the events that contain these blank values in Splunk? Or is there a way to drop these events when inputting the data?
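At search time, one option is simply to filter such events out; a sketch where the index and field names (my_index, myfield) are placeholders for your own:

```spl
index=my_index
| where isnotnull(myfield) AND myfield!="None" AND myfield!="NaN" AND myfield!=""
```

To drop the events at input (index) time instead, the usual mechanism is a props.conf/transforms.conf regex transform that routes matching events to nullQueue, so they are never indexed at all.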
Hi, I am using Phantom to solve login issues in Okta. If a user is facing a login issue in Okta, I want to create an event in Phantom for that and forward it for the next logical operation (like creating a ticket). But I don't have any idea how to create an event for this in Phantom. Is there any way to solve this problem? Thanks.