All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, I have upgraded my Splunk standalone Enterprise instance (indexer) from 6.4.1 to 7.0.0. I am not able to see data in my Monitoring Console (MC). Also, in the Resource Usage tab under Machine Information, the instance I see is not the same as the one in my General Setup, and the instance under Machine Information is not reachable and its URI is blank. Could someone help me with this? Thanks.
Hello, I have a situation where a DB Connect input is losing some rows. The connection is stable. The input runs every 5 minutes and is defined with a rising column on the "LASTMODIFIED" (DateTime) column, using ">=?" to avoid losing rows, ordered ASC. The vast majority of rows are indexed correctly, but a few are not. Interestingly, when I use SQL Explorer I retrieve the missing rows without any problem. I created a test input that was 100% identical to the original one, except for an added "and ID=123" in the SQL query limiting the ingest to one of the rows of interest. It worked like a charm. To answer the question before it is asked: I cannot use any different column as the rising column (hence ">=" instead of the typical ">" in the SQL query). Moreover, I checked and found that the missing "row 123" has no twin row with the same "LASTMODIFIED" value.

1. I am looking for ways to troubleshoot this. I tried increasing logging to DEBUG, but I drowned in the volume of logs and could not find anything obviously helpful. I was hoping to see the result of each row being parsed, but no luck.

2. If anyone can suggest possible causes that are not covered by the ">=" trick, I would also truly appreciate it.

Thank you in advance
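For anyone reasoning about the checkpoint mechanics here, the rising-column behavior can be modeled outside Splunk. Below is a minimal Python sketch of a ">= checkpoint" polling loop; the rows, column values, and function names are invented for illustration and this is not DB Connect's actual implementation:

```python
# Toy model of a rising-column input using ">= checkpoint".
# Rows are (id, lastmodified). The checkpoint stores the highest
# LASTMODIFIED seen so far; ">=" deliberately re-reads the boundary
# row so late-arriving twins with the same timestamp are not lost.
rows = [
    (101, "2020-05-06 10:00:00"),
    (102, "2020-05-06 10:05:00"),
    (123, "2020-05-06 10:05:00"),  # same timestamp as row 102
]

def poll(rows, checkpoint):
    """Return rows with lastmodified >= checkpoint, plus the new checkpoint."""
    batch = sorted((r for r in rows if r[1] >= checkpoint), key=lambda r: r[1])
    new_cp = batch[-1][1] if batch else checkpoint
    return batch, new_cp

seen = set()
batch, checkpoint = poll(rows, "")
for r in batch:
    seen.add(r[0])  # de-duplication by primary key would happen downstream

# A second poll re-reads the boundary rows (102 and 123) because of ">=".
batch2, checkpoint = poll(rows, checkpoint)
print(sorted(seen))            # [101, 102, 123]
print([r[0] for r in batch2])  # [102, 123]
```

The point of the sketch: with ">=", a missing row can only be explained by something outside the boundary logic (e.g. the row arriving in the database after the window has moved past its LASTMODIFIED value, timezone/precision differences in the stored checkpoint, or a parsing failure on that specific row), which may narrow down where to look in the DEBUG logs.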
Hello All, I encountered a visualization issue in the Single Value tile on Splunk Enterprise 7 when the current value is greater than zero and the compared old value is zero: the percentage change is displayed as "N/A" due to the division by 0 in the percentage ratio calculation (x/0 * 100). Even though the result is mathematically correct, displaying "N/A" makes users think that something is not working as expected in Splunk or that some data is corrupted. The only workaround I have found so far is replacing the 0 values with a slightly higher value between 0 and 0.1 (e.g. 0.001 or 0.999), but the displayed percentage is obviously ugly (e.g. 500000 for a current value of 5, when using 0.001 instead of 0). The query is just a count of some events, so, after the base search, it ends with "| timechart span=1h count by key". Does anyone have a solution or a better workaround? Thanks, G.P.
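One alternative to substituting a small nonzero value is to special-case the zero baseline and render a label instead of a number. Here is a hedged Python sketch of the underlying logic (the "new" and "0%" labels are my own choice, not a Splunk convention):

```python
def percent_change(current, previous):
    """Return the percent change as a display string, avoiding division by zero."""
    if previous == 0:
        if current == 0:
            return "0%"   # nothing changed
        return "new"      # value appeared from a zero baseline
    return f"{(current - previous) / previous * 100:+.1f}%"

print(percent_change(5, 0))  # new
print(percent_change(6, 4))  # +50.0%
print(percent_change(3, 4))  # -25.0%
```

In SPL, the same idea can be expressed with an eval case() that emits a literal string when the previous value is 0; the trade-off is that you then compute and display the delta yourself rather than relying on the Single Value tile's built-in comparison.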
Hi There, I have got a live feed from DB Connect for incidents data in the below format:

dateRaised, IncID, Location, Status, closedDate
05-05-20, 12345, ABC, Closed, 06-05-20
05-05-20, 23456, DEF, In Progress,
06-05-20, 34567, GHI, Closed, 06-05-20
24-04-20, 45678, JKL, On Hold,
28-04-20, 56789, MNO, Closed, 06-05-20

Here, the closedDate column is NULL for all incidents that are not yet closed, and _time is set to dateRaised. I want to create a rolling report of daily raised and closed incident counts for the last 30 days, with the below columns:

Date RaisedCount ClosedCount

I have tried solutions given for some of the similar questions on this site, but to no joy. I want to write a search that I can run with the time picker set to Yesterday and save as a scheduled report. This report would then run daily and store data in a summary index, from which I can get Date, RaisedCount, and ClosedCount.

One of the things I have tried is overriding _time with the closedDate column in a sub-search (time picker set to "Yesterday"). This gives me only those closed tickets which were raised yesterday; I want to include tickets raised even last month if they were closed yesterday.

index="idx" source="incident_data_source" sourcetype="incident_live_feed"
| eval _time=strptime(dateRaised, "%Y-%m-%d %H:%M:%S.%N")
| stats dc(dateRaised) as RaisedCount
| table RaisedCount
| appendcols
    [search source="Inc_Open_Closed_Test.csv" host="xyz" sourcetype="csv"
    | eval _time=strptime(closedDate, "%Y-%m-%d %H:%M:%S.%N")
    | stats dc(closedDate) as ClosedCount
    | table ClosedCount]
| table RaisedCount ClosedCount

Any help would be greatly appreciated. Thank you. Madhav
Hello, I'm starting a learning path on ITSI. Taking a quick look at the documentation, I see that the latest version (4.5.0) is cloud only. Should I start my learning from the latest version, because from now on it will be cloud-based only, or does the previous version still exist with a parallel rollout? Thanks
Hi, we need to schedule the billing dashboard to be exported to PDF and sent by email daily. Please help us with this.
Hello all, I have a report that searches over different time ranges: Year to now, Month to now, Last 5 days, and Last 24 hrs. The current search is:

search index
| stats sum(FAIL) as "Failures", sum(PASS) as "Passed", sum(TOTAL) as "Total"
| eval "Date" = "Last 24 hrs"
| append [ search "same search index" earliest=-5d@d latest=now()
    | stats sum(FAIL) as "Failures", sum(PASS) as "Passed", sum(TOTAL) as "Total"
    | eval "Date" = "Last 5 days"]
| append [ search "same search index" earliest=-0mon@mon latest=now()
    | stats sum(FAIL) as "Failures", sum(PASS) as "Passed", sum(TOTAL) as "Total"
    | eval "Date" = "Month to now"]
| append [ search "same search index" earliest=-0year@year latest=now()
    | stats sum(FAIL) as "Failures", sum(PASS) as "Passed", sum(TOTAL) as "Total", earliest(_time) as "Date"
    | convert timeformat="%m/%d/%Y" ctime("Date") as "Date"
    | eval "Date" = "Year to now - ".'Date']
| table "Date", "Failures", "Passed", "Total"

The final result is a table with 4 rows: Last 24 hours, Last 5 days, Month to now, and Year to now. The problem I'm currently facing is that the Year to now row sometimes returns a different date, like 01/27/2020, when it is supposed to be the first day of the current year. The Job Inspector also tells me that the append command is consuming a lot of time. Is it possible to run only the longest search (Year to now) and use multiple stats computations over the different time ranges to get the final result, or is there another way to improve this search? Thanks for all your help.
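The append-heavy pattern can usually be replaced by a single pass over the widest range, tagging each event with every cumulative bucket it falls into and summing per bucket. Here is a minimal Python sketch of that idea (timestamps, counts, and bucket boundaries are invented for illustration):

```python
from datetime import datetime

now = datetime(2020, 5, 8, 12, 0)
# Each bucket keeps events newer than its "earliest" boundary; the
# buckets overlap on purpose, exactly like the four appended searches.
ranges = {
    "Last 24 hrs": datetime(2020, 5, 7, 12, 0),
    "Last 5 days": datetime(2020, 5, 3, 0, 0),
    "Month to now": datetime(2020, 5, 1, 0, 0),
    "Year to now": datetime(2020, 1, 1, 0, 0),
}
events = [
    (datetime(2020, 5, 8, 9, 0), 1, 4),   # (_time, FAIL, PASS)
    (datetime(2020, 5, 5, 9, 0), 2, 8),
    (datetime(2020, 2, 1, 9, 0), 0, 5),
]

totals = {label: [0, 0] for label in ranges}  # [failures, passed]
for t, fail, ok in events:
    for label, earliest in ranges.items():
        if t >= earliest:
            totals[label][0] += fail
            totals[label][1] += ok

print(totals["Year to now"])   # [3, 17]
print(totals["Last 24 hrs"])   # [1, 4]
```

In SPL, the analogous approach is one search over earliest=-0year@year, an eval that sets boolean flags per range from _time and relative_time(now(), ...), and a single stats with sum(eval(...)) per flag, which avoids the repeated index scans that the Job Inspector is flagging.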
I was given the below Excel sheet of rules:

FT, OOS, ErrorCode, Priority, Min_req, Max_Req, Min_Req_Fail, Max_Req_Fail, Failure%, watch
ALCATEL_FT, ALCATEL, ALLERRORS, 1, 20, 300, 20, 40, 0, 3
ALCATEL_FT, ALCATEL, ALLERRORS, 2, 20, 300, 41, 75, 0, 2
ALCATEL_FT, ALCATEL, ALLERRORS, 3, 20, 300, 76, 0, 0, 1
ALCATEL_FT, ALCATEL, ALLERRORS, 4, 20, 0, 301, 0, 0, 1
ALCATEL_FT, ALCATEL, ALLERRORS, 5, 20, 0, 0, 0, 100, 2
GWR_FT, GWRNG_NY, ALLERRORS, 1, 20, 300, 20, 60, 0, 3
GWR_FT, GWRNG_NY, ALLERRORS, 2, 20, 300, 61, 100, 0, 2
GWR_FT, GWRNG_NY, ALLERRORS, 3, 20, 300, 101, 0, 0, 1
GWR_FT, GWRNG_NY, ALLERRORS, 4, 20, 0, 201, 0, 0, 1
GWR_FT, GWRNG_NY, ALLERRORS, 5, 20, 0, 0, 0, 90, 2

I have the log file in Splunk saved in index dte4fios, and I have the fields FT, Error_Code & OSS, which are the same as the lookup file columns FT, ErrorCode & OOS. I was given the below requirement to create a very dynamic alert using the rules in the lookup file.

1. My alert should run every 15 minutes (let us assume the alert is running at 11 AM) and check whether any FT & Error Code combination has satisfied the above rules; if yes, it should send an alert with the output of some other query.

2. Let me explain my understanding of the rules in the lookup file for one of the FTs, ALCATEL_FT in this case. Rules should be validated in order from Priority 5 down to Priority 1.

Priority 5 rule:
Min_req - the FT has got at least 20 hits in the last 15 minutes
Max_Req - max can be anything (0 denotes that max can be anything)
Failure% - the failure percentage is 100
watch - 2 means I need to check whether this FT also satisfied rule 5 in my previous 15-minute interval (i.e. 10:30-10:45). Only if yes should the alert be sent.
Priority 4 rule:
Min_req - the FT has got at least 20 hits in the last 15 minutes
Max_Req - max can be anything (0 denotes that max can be anything)
Min_Req_Fail - the failure count is >=301
watch - 1 means I need not check the prior 15-minute interval and can just send the alert

Priority 3 rule:
Min_req - the FT has got at least 20 hits in the last 15 minutes
Max_Req - the FT has got at most 300 hits in the last 15 minutes
Min_Req_Fail - the failure count is >=76
watch - 1 means I need not check the prior 15-minute interval and can just send the alert

Priority 2 rule:
Min_req - the FT has got at least 20 hits in the last 15 minutes
Max_Req - the FT has got at most 300 hits in the last 15 minutes
Min_Req_Fail - the failure count is >=41
Max_Req_Fail - the failure count is <=75
watch - 2 means I need to check whether this FT also satisfied rule 2 in my previous 15-minute interval (i.e. 10:30-10:45). Only if yes should the alert be sent.

Priority 1 rule:
Min_req - the FT has got at least 20 hits in the last 15 minutes
Max_Req - the FT has got at most 300 hits in the last 15 minutes
Min_Req_Fail - the failure count is >=20
Max_Req_Fail - the failure count is <=40
watch - 3 means I need to check whether this FT also satisfied rule 1 in my previous two 15-minute intervals (i.e. 10:30-10:45 and 10:15-10:30). Only if yes should the alert be sent.

This is how I need to compare my log with the rules in the lookup table and generate an alert. The lookup table above is a sample; the real one has more rows with different FT, Error_Code & rules. I am pretty new to Splunk, and it would be a real help if someone could guide me in writing a query to achieve this for one FT so that I can extend it to the entire lookup table.
index=dte_fios sourcetype=dte2_Fios FT=*FT earliest=04/20/2020:11:00:00 latest=04/20/2020:13:00:00
| stats count as Total, count(eval(Error_Code!="0000")) AS Failure by FT
| eval Failurepercent=round(Failure/Total*100)
| table FT, Total, Failure, Failurepercent

I need help with how to get the rows for every 15-minute interval for each FT & ErrorCode and then verify the rules.
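It can help to prototype the rule evaluation outside SPL first. Below is a hedged Python sketch of checking one lookup row against per-window aggregates; the field names mirror the lookup columns, a 0 in a rule means "no constraint", and the matching logic is my interpretation of the rules above, so adjust it if I have read them differently than intended:

```python
def rule_matches(agg, rule):
    """agg: {'total': hits, 'fail': failures, 'failpct': percent} for one
    FT/ErrorCode combination in one 15-minute window.
    rule: one lookup row; a value of 0 means 'no constraint'."""
    if agg["total"] < rule["Min_req"]:
        return False
    if rule["Max_Req"] and agg["total"] > rule["Max_Req"]:
        return False
    if rule["Min_Req_Fail"] and agg["fail"] < rule["Min_Req_Fail"]:
        return False
    if rule["Max_Req_Fail"] and agg["fail"] > rule["Max_Req_Fail"]:
        return False
    if rule["Failure%"] and agg["failpct"] < rule["Failure%"]:
        return False
    return True

def should_alert(windows, rule):
    """windows: newest-first list of per-window aggregates. Alert only if
    the rule matched in the latest `watch` consecutive windows."""
    return all(rule_matches(w, rule) for w in windows[: rule["watch"]])

# ALCATEL_FT priority-1 row from the lookup: 20, 300, 20, 40, 0, watch=3.
rule_p1 = {"Min_req": 20, "Max_Req": 300, "Min_Req_Fail": 20,
           "Max_Req_Fail": 40, "Failure%": 0, "watch": 3}
windows = [{"total": 100, "fail": 25, "failpct": 25}] * 3
print(should_alert(windows, rule_p1))  # True
```

In SPL terms, the per-window aggregates would come from a `bin _time span=15m | stats ... by _time, FT, Error_Code` search over the last 45 minutes, joined to the lookup with `lookup`, with the watch logic expressed via `streamstats` or `eventstats` over the windows.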
Hello guys, I am trying to automate the communication between Splunk ES and Phantom by adding "Run playbook in Phantom" to the correlation search's adaptive response actions. I've noticed that when the action is automated, very few fields are sent to the Phantom container, whereas when running the adaptive response manually, all the fields present in the notable event are sent to the Phantom container correctly. Does anyone have any idea what the issue could be? Is a race condition a possibility? Thank you
| rex field=_raw max_match=0 "BodyOftheMail_Script\s=\s\[\sBEGIN\s{0,}(?<BodyOftheMail>.((.|\n)*?)(?=\s{1,}END\s\]))"

I am trying to read the body of the mail from logs (some of them are more than 500 lines). I do not want to increase the value in limits.conf. Is my rex correct? Kindly help. The events look like:

BodyOftheMail_Script = [ BEGIN 500 lines END ]
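The extraction can be prototyped in Python to verify the pattern logic. The sketch below uses a simplified but equivalent regex (a non-greedy DOTALL capture between BEGIN and the closing "END ]") rather than the exact rex above, and the sample log line is invented, so treat it as a sketch:

```python
import re

log = """BodyOftheMail_Script = [ BEGIN
line one of the mail
line two of the mail
 END ]"""

# Non-greedy capture of everything between BEGIN and the closing "END ]".
# re.DOTALL lets "." match newlines, like (.|\n) in the original rex.
pattern = re.compile(
    r"BodyOftheMail_Script\s=\s\[\sBEGIN\s*(?P<BodyOftheMail>.*?)(?=\s+END\s\])",
    re.DOTALL,
)
m = pattern.search(log)
print(m.group("BodyOftheMail"))
```

Note that `.*?` with DOTALL is a simpler, equivalent form of `.((.|\n)*?)` in the original: the leading extra `.` there forces the first character of the body into the match and creates unnecessary capture groups.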
Hello Experts, I have attached two images. The following search returns results as shown in the "CURRENT" image, but I would like to have results as per the "DESIRED" image. Could you please help?

base search from JSON..
| stats values(date1) by hostname
| rename values(date1) AS date1
| stats list(hostname) AS hostname by date1
| xyseries hostname date1 date1

Thank you.
Hi There, I have got incidents data in the below format:

dateRaised, IncID, Location, Status, closedDate
05-05-20, 12345, ABC, Closed, 06-05-20
05-05-20, 23456, DEF, In Progress,
06-05-20, 34567, GHI, Closed, 06-05-20
24-04-20, 45678, JKL, On Hold,
28-04-20, 56789, MNO, Closed, 06-05-20

Here, the closedDate column is NULL for all incidents that are not yet closed, and _time is set to dateRaised. I want to calculate raised and closed incident counts for a particular time period (say, for 06-05-20), with expected results in the below tabular format:

Date RaisedCount ClosedCount
06-05-20 1 3

I have tried solutions given for some of the similar questions on this site, but to no joy. I tried the below searches with a date range between 06-05-20 00:00:00 and 06-05-20 24:00:00 in the time picker.

Trial 1:

source="Inc_Open_Closed_Test.csv" host="xyz" sourcetype="csv"
| eval _time=strptime(dateRaised, "%Y-%m-%d %H:%M:%S.%N")
| stats dc(dateRaised) as RaisedCount
| table RaisedCount
| appendcols
    [search source="Inc_Open_Closed_Test.csv" host="xyz" sourcetype="csv"
    | eval _time=strptime(closedDate, "%Y-%m-%d %H:%M:%S.%N")
    | stats dc(closedDate) as ClosedCount
    | table ClosedCount]
| table RaisedCount ClosedCount

Trial 2:

source="Inc_Open_Closed_Test.csv" host="xyz" sourcetype="csv"
| eval dateRaised=strptime(dateRaised, "%Y-%m-%d %H:%M:%S")
| eval closedDate=strptime(closedDate, "%Y-%m-%d %H:%M:%S")
| eval status=mvappend("opened","closed")
| mvexpand status
| eval _time=case(status="opened", dateRaised, status="closed", closedDate)
| timechart span=1d count by status

Any help would be greatly appreciated. Thank you. Madhav
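The counting logic behind Trial 2 (each incident contributes one "raised" event and, when closedDate is present, one "closed" event) can be sanity-checked outside Splunk against the sample rows above. A minimal Python sketch:

```python
from collections import Counter

# (dateRaised, IncID, Location, Status, closedDate); None = not yet closed
incidents = [
    ("05-05-20", 12345, "ABC", "Closed", "06-05-20"),
    ("05-05-20", 23456, "DEF", "In Progress", None),
    ("06-05-20", 34567, "GHI", "Closed", "06-05-20"),
    ("24-04-20", 45678, "JKL", "On Hold", None),
    ("28-04-20", 56789, "MNO", "Closed", "06-05-20"),
]

# Count each incident once under its raised date and once under its
# closed date (skipping NULLs), independently of each other.
raised = Counter(r[0] for r in incidents)
closed = Counter(r[4] for r in incidents if r[4] is not None)

day = "06-05-20"
print(day, raised[day], closed[day])  # 06-05-20 1 3
```

This reproduces the expected row (RaisedCount=1, ClosedCount=3 for 06-05-20), which suggests the mvexpand approach of Trial 2 is sound and the problem is more likely in the strptime formats (the sample dates are dd-mm-yy, while the searches parse "%Y-%m-%d %H:%M:%S") or in the time-picker window filtering events before the _time override.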
I have created an app for running a PowerShell script, which gives output as below:

@{Date=05/08/2020; ARRAY=Server1; LATEST SNAPSHOT(In Days)=1; LATEST SNAPSHOT DATE=05/07/2020; OLDEST SNAPSHOT(In Days)=62}
@{Date=05/08/2020; ARRAY=Server2; LATEST SNAPSHOT(In Days)=1; LATEST SNAPSHOT DATE=05/07/2020; OLDEST SNAPSHOT(In Days)=62}
@{Date=05/08/2020; ARRAY=Server3; LATEST SNAPSHOT(In Days)=1; LATEST SNAPSHOT DATE=05/07/2020; OLDEST SNAPSHOT(In Days)=62}
@{Date=05/08/2020; ARRAY=Server3; LATEST SNAPSHOT(In Days)=0; LATEST SNAPSHOT DATE=05/08/2020; OLDEST SNAPSHOT(In Days)=112}

These lines are being ingested as a single event, and I need every line as a separate event, so I tried the following props.conf for my sourcetype:

[my_sourcetype]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false

But even after this, I am not getting separate events. Any suggestions?
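Two things commonly cause this. First, LINE_BREAKER and SHOULD_LINEMERGE only take effect on the first "parsing" tier the data reaches (an indexer or heavy forwarder, not a universal forwarder), and require a restart of that instance; already-indexed events will not be re-broken. Second, since every event here starts with "@{", anchoring the breaker on that prefix is more robust than breaking on every newline. A hedged props.conf sketch, assuming the sourcetype name is really my_sourcetype:

```
[my_sourcetype]
SHOULD_LINEMERGE = false
# Break only on newlines that are immediately followed by "@{",
# so wrapped lines inside one record do not split the event.
LINE_BREAKER = ([\r\n]+)(?=@\{)
```

If events are still not splitting, it is worth verifying (e.g. via btool) that this stanza actually wins on the parsing instance and that nothing upstream is overriding the sourcetype.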
Hi everyone, I recently started a project in JavaScript with the Vue.js framework. It's a simple setup with three components and a router. I'm trying to use the Splunk JavaScript SDK in the Vue.js project, but it's not working. It worked from a simple HTML page where the JavaScript was directly embedded, but the Vue.js framework doesn't allow me to do so. I managed to install the SDK in the project as a dependency, and it's visible in the node_modules list, but when I try to include it in a component, everything breaks. Can someone share some good practices for using the SDK with Vue.js or a similar framework? Thanks.
I got this error on the search head; please help us resolve it. Thanks.

Search peer xxxxxx has the following message: The metric value=0.00003393234971117585 provided for source=/opt/splunkforwarder/var/log/splunk/metrics.log, sourcetype=splunk_metrics_log, host=xxxxx, index=_metrics is not a floating point value. Using a "numeric" type rather than a "string" type is recommended to avoid indexing inefficiencies. Ensure the metric value is provided as a floating point number and not as a string. For instance, provide 123.001 rather than "123.001"
Hi Splunkers, I found this visualization (https://splunkbase.splunk.com/app/3120/) useful. It is built by Splunk, but the problem is that it comes as a separate app, and I want to include it in my app bundle itself. I checked the license and it's MIT. I am not sure whether we can just copy it into the app bundle; if we can, what would be the steps to do so? Thanks, Harsh
Hello, attached here is the list of roles we have, but my regular expression is showing results only for "RSI - VPN Users" and not for any of the other roles.

rex "^[^\)\n]*\)\[(?P\w+\s+\-\s+\w+\s+\w+)]"

Can you please help me here? Entire query:

index=juniperindex
| rex "(?P\w+\s+\d+)\s+(?P\d+:\d+:\d+)\s?+(?P\d+\.\d+\.\d+\.\d+)\s+(?P\d+-\d+-\d+T\d+:\d+:\d+-\d+:\d+)\s+(?P[[:graph:]]+)\s+\w+:\s+\d+-\d+-\d+\s+\d+:\d+:\d+\s+-\s+\w++\s+-\s+\[(?P\d+\.\d+\.\d+\.\d+)\]\s+(?P\w+)\((?P[[:graph:]]+)\)\[\]\s+-\s+(?P.+)"
| rex "^[^\)\n]*\)\[(?P\w+\s+\-\s+\w+\s+\w+)"
| rex "^(?:[^'\n]*'){7}(?P\w+)]"
| rex "host\s+\'(?P[[:graph:]]+)\'"
| rex "address\s+\'(?P[[:graph:]]+)\'"
| rex "for\s+user\s+\'(?P[[:alnum:]]+)\'"
| rex "reason\s+\'(?P[[:print:]]+)\'"
| rex "^(?:[^'\n]*'){2}\s+(?P\w+)"
| search status=failed OR status=passed
| replace "passed" with successful in status
| dedup user_name
| table _time IP MAC user_name status user_group
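One likely cause: the role pattern `\w+\s+\-\s+\w+\s+\w+` demands exactly two words after the hyphen, so any role name with a different word count cannot match. This can be tested in isolation in Python; the sample lines and the role names other than "RSI - VPN Users" are made up, since the attachment is not visible:

```python
import re

lines = [
    "...)[RSI - VPN Users]",
    "...)[RSI - Admins]",           # one word after the hyphen
    "...)[RSI - VPN Power Users]",  # three words after the hyphen
]

# Same shape as the original role pattern: word, hyphen, exactly two words.
strict = re.compile(r"\)\[(?P<group>\w+\s+-\s+\w+\s+\w+)\]")
# More permissive: capture anything up to the closing bracket.
loose = re.compile(r"\)\[(?P<group>[^\]]+)\]")

for line in lines:
    s = strict.search(line)
    print(s.group("group") if s else None, "|", loose.search(line).group("group"))
```

The strict pattern only matches the first line, while the loose `[^\]]+` form captures all three, which matches the symptom described. Note also that the pasted query has lost the names inside its `(?P...)` groups; in SPL each group needs the form `(?P<name>...)` (or `(?<name>...)`) to extract a field.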
Greetings! Why can't users connecting from outside over VPN access the Splunk Web GUI? Where should I check, and how do I fix this issue? Thanks in advance
SmartVision is a new feature in FireEye that generates alerts to identify lateral attacks. I see other alerts going to Splunk, but SmartVision alerts aren't. Can someone please help me here?
I have two rows with the following values:

Name Text Count
A ABC 1
A EFG 1

I want my result to be displayed in a single row, showing the count as 2 and both text values for the common name A. Is there a way we can achieve this?
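In SPL, this is typically `... | stats sum(Count) as Count, values(Text) as Text by Name`. The same aggregation sketched in Python for clarity, using the sample values from the question:

```python
from collections import defaultdict

rows = [("A", "ABC", 1), ("A", "EFG", 1)]

# Group by Name: collect every Text value and sum the counts,
# mirroring values(Text) and sum(Count) in the stats command.
combined = defaultdict(lambda: {"texts": [], "count": 0})
for name, text, count in rows:
    combined[name]["texts"].append(text)
    combined[name]["count"] += count

for name, agg in combined.items():
    print(name, agg["texts"], agg["count"])  # A ['ABC', 'EFG'] 2
```

In the Splunk result, Text becomes a multivalue field holding both ABC and EFG in the single row for Name=A.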