All Topics

Hello, I have 3 values: 15, 26, 18. Now assume 18 is my latest value, and I want to find p25 and p75 including the latest value, using all 3 values. Once done, I should have one statistics table with 2 columns. I tried to do it using stats; below is a picture. I verified the same via the internet and found the values shown below. Now I don't understand why I don't get the same values when using streamstats. I use window=3, which should consider all 3 values, and since it's streaming, shouldn't it just look at the previous 3 values (current=t) and find p25 and p75? Then why am I getting different values, as shown below? Just look at the last row values.
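For anyone comparing numbers here: Splunk's perc25()/perc75() use an approximate algorithm, while many online calculators use linear interpolation, so small samples can disagree. A sketch (recreating the three values with makeresults is an assumption about how your data is laid out) that computes both the approximate and exact variants side by side:

```
| makeresults count=3
| streamstats count as n
| eval value=case(n=1, 15, n=2, 26, n=3, 18)
| stats perc25(value) as p25_approx, perc75(value) as p75_approx,
        exactperc25(value) as p25_exact, exactperc75(value) as p75_exact
```

If the exactperc values match the online calculator and perc does not, the difference is the estimation algorithm, not streamstats itself.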
Hello Everyone, I have an environment with an indexer cluster and three search heads that currently search data in this cluster. I want to create a SH cluster with these three search heads, but their hardware specifications differ:
-SH1: 40 cores, 128GB RAM (chosen as captain)
-SH2: 24 cores, 64GB RAM (member)
-SH3: 24 cores, 64GB RAM (member)
The Splunk documentation specifies: "Use identical specifications for all members (bare metal or VM)". What would be the impact or implications of deploying a search head cluster with these servers having different hardware specifications? Will the captain use only 24 cores and 64GB RAM like the other cluster members? Or will the captain assume every server has the same hardware capabilities as itself, as the following text suggests: "Splunk recommends that you use homogeneous machines with identical hardware specifications for all cluster members. The reason is that the cluster captain assigns scheduled jobs to members based on their current job loads. When it does this, it does not have insight into the actual processing power of each member's machine. Instead, it assumes that each machine is provisioned equally." I will appreciate your knowledge, thoughts and recommendations. Thanks in advance.
I just set up DB Connect 3.4.0 and set up my MySQL connection. The logs are throwing this error:

2020-09-24 18:37:44.357 +0000 [QuartzScheduler_Worker-14] ERROR org.easybatch.core.job.BatchJob - Unable to write records java.io.IOException: Failed to post to https://127.0.0.1:8088/services/collector/event, HTTP Error 403, HEC response body: {"text":"Invalid token","code":4}, trace: HttpResponseProxy{HTTP/1.1 403 Forbidden [Date: Thu, 24 Sep 2020 18:37:44 GMT, Content-Type: application/json; charset=UTF-8, X-Content-Type-Options: nosniff, Content-Length: 33, Vary: Authorization, Connection: Keep-Alive, X-Frame-Options: SAMEORIGIN, Server: Splunkd] ResponseEntityProxy{[Content-Type: application/json; charset=UTF-8,Content-Length: 33,Chunked: false]}}

I checked the HEC settings and there is a db-connect-http-input with a valid token; that same token also exists in /etc/apps/splunk_app_db_connect/local/inputs.conf. What's the issue here?
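One way to isolate this (a sketch; the token value is a placeholder) is to exercise the HEC token directly with curl. If this also returns 403 "Invalid token", the token configured in the HEC input differs from the one DB Connect is sending from splunk_app_db_connect/local/inputs.conf; if it returns success, the problem is on the DB Connect side:

```
curl -k https://127.0.0.1:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec smoke test"}'
```

It can also be worth checking that HEC is enabled globally and that the token itself is not disabled.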
Hi, I am new to Splunk. I am trying to clean up my logging message format. I have a log message with a newline or carriage return (not sure which), but when I try to remove it using rex field=message mode=sed "s/^[\r\n]+//g" it does not work. Any suggestions? I am not sure if there are any spaces or whitespace, but I also tried s/^/S*[\r\n]+//g
Message:
Line1
Line2
Line3
Expected: Line1 Line2 Line3
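For reference, a sketch that strips line breaks anywhere in the field, assuming the field really is called message. The `^` anchor in the original attempt only matches the very beginning of the field value, which is why mid-field line breaks survive; dropping the anchor replaces every run of \r\n with a space:

```
| rex field=message mode=sed "s/[\r\n]+/ /g"
```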
Hi Folks, Is there a way we can have the same timezone for all the alerts, reports, and dashboards that we create in Splunk? For instance: we have an admin user created in the GMT zone, and all the reports, alerts and dashboards have been created using this user in the GMT timezone. When any other user runs or modifies the reports, alerts or dashboards, the display changes to their timezone preference. Can't we keep a fixed timezone for all the reports and alerts?   Regards, Manish
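For what it's worth, one approach (a sketch; adapt the path, and note this only sets a default for users who have not chosen their own preference) is to set a site-wide default timezone via user-prefs.conf:

```
# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
[general_default]
tz = GMT
```

Users who have explicitly set a timezone in their own preferences will still override this default.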
Guys, I need to create a table with 3 columns that shows me the total of products per week. Like:

Products    TotalCountOneWeekAgo    TotalCountTwoWeeksAgo    TotalCountThreeWeeksAgo
A           10                      16                       6
B           15                      8                        10
C           20                      14                       12

How can I create these three columns?
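One possible sketch (the index and the product field name are assumptions to adapt): bucket each event by how many whole weeks ago it occurred, label the buckets, and pivot with chart:

```
index=your_index earliest=-3w@w latest=@w
| eval weeks_ago=floor((relative_time(now(), "@w") - _time) / 604800) + 1
| eval week_label=case(weeks_ago=1, "TotalCountOneWeekAgo",
                       weeks_ago=2, "TotalCountTwoWeeksAgo",
                       weeks_ago=3, "TotalCountThreeWeeksAgo")
| chart count over product by week_label
```

Here 604800 is the number of seconds in a week, and relative_time(now(), "@w") snaps to the start of the current week so the buckets align to calendar weeks.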
Looking for a way to monitor sniffing ports on a sensor. Each port is tied to a different part of the system, and I would like a dashboard notification UP/DOWN if traffic has not been received in, say, 1-5 minutes. I thought about using ifconfig, but I think I made it too complicated by using grep to narrow down just the RX/TX information: ifconfig | egrep -e ^eth | grep -v "inet\|UP" which does a pretty good job for CLI output. I tried installing the *Nix app but can't really get it to work. I am using a Splunk server, and the "sensors" are a different server which has the UFW. Thanks
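If the interface output is already being indexed (for example via a scripted input running that ifconfig command), a sketch of a dashboard search along these lines could flag ports with no recent data; the index, sourcetype, and field names here are assumptions:

```
index=os sourcetype=interface_stats
| stats latest(_time) as last_seen by host, interface
| eval status=if(now() - last_seen > 300, "DOWN", "UP")
| table host interface status last_seen
```

Note this flags "no events received in 5 minutes" rather than "counters unchanged"; if the input reports on a fixed schedule, comparing RX byte deltas between reports would be the stricter check.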
We are investigating the best practice for rolling out Splunk Cloud instances. Traditional wisdom says that, to avoid noisy neighbors, we should separate production from non-production, and further separate by BU to avoid adverse impact. Do Splunk Cloud workload rules provide adequate isolation for these to live happily together? From a pricing perspective, do five 1 TB/day ingestion instances cost more than one 5 TB/day ingestion instance?
I have 1600+ storage arrays from multiple vendors, each with different thin provisioning levels. I currently have two columns, one called TP at 1.2 and one called TP at 1.5, and I'd like to combine them into a single column. I tried an if statement, but I couldn't get it right; I'm thinking I need to use a case statement, but I'm not sure. Here is an example: eval "Thin Prov"=Case(((SV='vendorA' AND SM='MODEL1' OR SM='Model2'),(TC*1.5), (SV='vendorb' AND SM='Model3' or SM='Model4',TC*1.2))
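For comparison, a sketch of what the corrected case() version might look like: case is lowercase, string literals use double quotes, each OR is parenthesized so the vendor check applies to both models, and condition/value pairs alternate. The final true() branch is an added assumption here, acting as a catch-all for arrays that match neither vendor:

```
| eval "Thin Prov"=case(
    SV="vendorA" AND (SM="MODEL1" OR SM="Model2"), TC*1.5,
    SV="vendorb" AND (SM="Model3" OR SM="Model4"), TC*1.2,
    true(), TC)
```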
My Splunk infrastructure is all Windows, and every server/workstation is Windows with the exception of one server, which is Linux. I was able to deploy the universal forwarder to the Linux server, and I have begun receiving data at my Splunk indexer. My question is: is there an app similar to the App for Windows Infrastructure, with built-in dashboards for Linux, that I can install on my indexer/search head (which is Windows)? Thanks!
Hi, We would like to generate a javacore (which is similar to a thread dump) on our WebSphere Application Servers (v8.5.5) running the IBM JVM. I have implemented the available Thread Dump action to be fired if any hung threads are detected. However, the thread dump being captured does not have the complete information we're looking for (stack trace and full JVM-related information). The usual way the admins generate this information is by running the kill -3 <PID> command against the JVM process ID. Is it possible to implement the same using the AppDynamics Thread Dump action? Kind Regards, Ashley Lewis
JAVA App Agent is showing in Tier and Node, but in business transactions (or anywhere else) I can't see any metrics. In the app agent logs I can see the following messages:

[AD Thread Pool-Global283] 23 Sep 2020 16:24:38,705 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=0.5.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global281] 23 Sep 2020 16:24:38,811 INFO XMLConfigManager - Minimal certificate chain validation performed
[AD Thread Pool-Global281] 23 Sep 2020 16:24:38,926 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread-Metric Reporter0] 23 Sep 2020 16:25:00,641 INFO XMLConfigManager - Minimal certificate chain validation performed
[AD Thread Pool-Global283] 23 Sep 2020 16:25:17,452 INFO XMLConfigManager - Minimal certificate chain validation performed
[AD Thread Pool-Global283] 23 Sep 2020 16:25:27,092 INFO XMLConfigManager - Minimal certificate chain validation performed
[AD Thread Pool-Global283] 23 Sep 2020 16:25:38,927 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=0.5.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
Guys, I need to create a table where I have the total of products from each week. Like:

Products    Total count from week1    Total count from week2    Total count from week3
A           10                        5                         7
B           15                        6                         13
C           20                        10                        21
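An alternative sketch (index and field name assumed): let timechart bucket the counts by week, then transpose so each week bucket becomes a column and each product becomes a row:

```
index=your_index earliest=-3w@w latest=@w
| timechart span=1w count by product
| transpose header_field=_time column_name=Products
```

The column headers will be the raw week-start timestamps; they can be renamed afterwards if friendlier labels are needed.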
Hi everybody, I have a Splunk deployment with 2 IDX, 1 HF and 2 SH, all running on Windows Server. All the Splunk instances are 7.3.6. As per the subject, I got a very strange issue when trying to configure the MS Office 365 Add-On (version 2.0.2) on the Heavy Forwarder. On the other hand, when I tried to configure it on a Search Head, everything worked fine, and the Add-On is still running properly on that instance since I'm not able to solve the HF issue. SH and HF were in the same subnet when the issue happened (now the SH has been moved into another one, but the issue showed up for the first time when they were in the same subnet). Here are the details of the issue: when just clicking on the "Settings" tab of the application (no settings yet configured) I got this error message in a red frame at the top of the page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <!-- FileName: index.html Language: [en] --> <!--Head--> <head> <meta content="text/html; charset=UTF-8" http-equiv="Content-Type"> <meta http-equiv="X-UA-Compatible" content="IE=7" /> <title>McAfee Web Gateway - Notification</title> <script src="/mwg-internal/de5fs23hu73ds/files/javascript/sw.js" type="text/javascript" ></script> <link rel="stylesheet" href="/mwg-internal/de5fs23hu73ds/files/default/stylesheet.css" /> </head> <!--/Head--> <!--Body--> <body onload="swOnLoad();"> <table class='bodyTable'> <tr> <td class='bodyData' background='/mwg-internal/de5fs23hu73ds/files/default/img/bg_body.gif'> <!--Logo--> <table class='logoTable'> <tr> <td class='logoData'> <a href='http://www.mcafee.com'> <img src='/mwg-internal/de5fs23hu73ds/files/default/img/logo_mwg.png'></a> </td> </tr> </table> <!--/Logo--> <!--Contents--> <!-- FileName: cannotconnect.html Language: [en] --> <!--Title--> <table class='titleTable' background='/mwg-internal/de5fs23hu73ds/files/default/img/bg_navbar.jpg'> <tr> <td class='titleData'> Cannot Connect </td> 
</tr> </table> <!--/Title--> <!--Content--> <table class="contentTable"> <tr> <td class="contentData"> The proxy could not connect to the destination in time. </td> </tr> </table> <!--/Content--> <!--Info--> <table class="infoTable"> <tr> <td class="infoData"> <b>URL: </b><script type="text/javascript">break_line("https://127.0.0.1:8089/servicesNS/nobody/splunk_ta_o365/configs/conf-splunk_ta_o365_settings/proxy?output_mode=json&amp;count=0");</script><br /> </td> </tr> </table> <!--/Info--> <!--/Contents--> <!--Policy--> <table class='policyTable'> <tr> <td class='policyHeading'> <hr> Company Acceptable Use Policy </td> </tr> <tr> <td class='policyData'> This is an optional acceptable use disclaimer that appears on every page. You may change the wording or remove this section entirely in index.html. </td> </tr> </table> <!--/Policy--> <!--Foot--> <table class='footTable'> <tr> <td class='helpDeskData' background='/mwg-internal/de5fs23hu73ds/files/default/img/bg_navbar.jpg'> For assistance, please contact your system administrator. </td> </tr> <tr> <td class='footData'> generated <span id="time">2020-09-24 16:21:46</span> by McAfee Web Gateway <br /> python-requests/2.21.0 </td> </tr> </table> <!--/Foot--> </td> </tr> </table> </body> <!--/Body--> </html>

This is just the page generated (but not rendered) by the McAfee Web Gateway, and because of it the application is not able to read the "splunk_ta_o365_settings.conf" file. It seems that the URL causing the web gateway error is: https://127.0.0.1:8089/servicesNS/nobody/splunk_ta_o365/configs/conf-splunk_ta_o365_settings/proxy?output_mode=json&amp;count=0 But if I type the URL in the address bar of my browser, I get the requested JSON without any problem. Both SH and HF are under the same Web Gateway proxy configuration/policy. Any idea about this? Did anyone experience the same issue? Thanks in advance
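One thing possibly worth checking, since the error page shows a request to 127.0.0.1:8089 being intercepted by the gateway: make sure loopback REST calls on the HF bypass the proxy. A sketch (the gateway host/port are placeholders, and exact stanza support depends on the Splunk version):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[proxyConfig]
http_proxy = http://your-mcafee-gateway:9090
https_proxy = http://your-mcafee-gateway:9090
no_proxy = localhost, 127.0.0.1
```

If the proxy was instead configured via HTTP_PROXY/HTTPS_PROXY environment variables on the HF, adding a NO_PROXY entry for 127.0.0.1 there is the equivalent check.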
Hello, I am interested in using the results of one index search (in particular the values of the fields early and late) in a different index search, as the values assigned to earliest/latest.  index="a" <find a specific event> | eval timeTOsecs=strftime(_time, "%s") | eval early_time= timeTOsecs-300 | eval late_time= timeTOsecs+300 | eval early=strftime(early_time, "%m/%d/%Y:%H:%M:%S") | eval late=strftime(late_time, "%m/%d/%Y:%H:%M:%S") My next search would find all events using the early and late values from the previous search, assigning them to earliest/latest: index="b" earliest=early latest=late Everything I have tried up to this point suggests that the "earliest" and "latest" modifiers will not allow you to assign a field value to them.  Essentially I want to perform the function that Splunk automates through its UI when it lets the user run a search on events before and after a given time.    Thanks to anyone who can help, and let me know if I can be clearer in explaining, because sometimes it is hard to understand other people's context.
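One pattern that can work here (a sketch; <find a specific event> is the placeholder from the question) is to compute the window in a subsearch and name the fields earliest and latest. Because return emits field=value pairs into the outer search string, they are interpreted as time modifiers, and epoch values are accepted directly, with no strftime formatting needed:

```
index="b"
    [ search index="a" <find a specific event>
      | eval earliest=_time-300, latest=_time+300
      | return earliest latest ]
```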
Hi, What I am trying to do is determine from a lookup table whether we have a maintenance window active, in order to effectively disable a number of alerts. Excluding the log lines from the searches is not an option, because the alerts would interpret that as an error situation, since the successful cases would be missing. I already have a lookup table containing the start and end times for the maintenance windows. The following produces promising results:   | inputlookup maintenancetimes.csv | convert timeformat="%Y/%m/%d %H:%M:%S %p" mktime(MaintStart) mktime(MaintEnd) | eval Break=if( now() > MaintStart AND now() < MaintEnd, "Yes", "") | sort -Break | return 500 Break   The result is   (Break="Yes") OR (Break="")   Which I interpret as the presence of both active and inactive maintenance windows. However, when I try to use the data from a subsearch, it isn't doing what I want.   | makeresults count=2 annotate=true | eval IsBreak=if(match([ | inputlookup maintenancetimes.csv | convert timeformat="%Y/%m/%d %H:%M:%S %p" mktime(MaintStart) mktime(MaintEnd) | eval Break=if( now() > MaintStart AND now() < MaintEnd, "Yes", "") | sort -Break | return 500 $Break ],"Yes"),1,0) | table IsBreak _time   The results show 0 as the value of IsBreak, and I can't figure out why. The intention is of course to utilize this as part of a more complicated search/alert. What am I doing wrong? Best regards, Petri
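A variant worth trying (a sketch, under the assumption that the timestamp format in the lookup matches the strptime pattern): do the time comparison on epoch values inside the subsearch, and return just the count of currently-active windows with `$count`, which sidesteps the Break="Yes"/Break="" string matching entirely:

```
| makeresults
| eval active=[ | inputlookup maintenancetimes.csv
    | eval start=strptime(MaintStart, "%Y/%m/%d %H:%M:%S %p"),
           end=strptime(MaintEnd, "%Y/%m/%d %H:%M:%S %p")
    | where now() > start AND now() < end
    | stats count
    | return $count ]
| eval IsBreak=if(active > 0, 1, 0)
```

With `return $count`, the subsearch hands back the bare value rather than a field=value filter, so active becomes a plain number in the outer search.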
I would like to download Splunk Enterprise 6.6.0 but can't find it among the older versions. A few months ago I could find it. Where can I find this version?
We are planning to achieve ISO 27001 (open data exchange) certification, for which we need to meet specific auditing requirements. Is there any app/add-on in Splunk that provides dashboards/compliance content to validate this, and thus help make us ISO 27001 compliant?
Hello, We receive web access logs in Splunk. I created a report in Splunk that aggregates the data (web access logs): information like the total number of calls and the total number of error calls per customer. I saw that I can easily extract the data in JSON format from the report using the Splunk UI, but I need to do this programmatically, because afterwards I need to send the file to a different place. How can I achieve this? Thank you, Andrei
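One way to do this (a sketch; host, credentials, and report name are placeholders) is the REST search export endpoint, which runs the saved report and streams the results as JSON without going through the UI:

```
curl -k -u admin:changeme \
  https://your-splunk:8089/services/search/jobs/export \
  --data-urlencode search='| savedsearch "your_report_name"' \
  -d output_mode=json
```

Redirecting the output to a file then gives you something you can ship elsewhere from a script or scheduler.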
Hello Splunk Community, I am kind of a beginner in Splunk and need help with a scenario. I have the below example logs:

2020-08-20 08:52:46, 760 XYZ_Processor/1.1.0 Application Process Completed
2020-08-20 08:51:46, 760 XYZ_Processor/1.1.0 Random logs
2020-08-20 08:50:46, 760 XYZ_Processor/1.1.0 Random logs
2020-08-20 08:47:46, 760 XYZ_Processor/1.1.0 Application Process Id generated : 23232
2020-08-20 08:40:46, 760 XYZ_Processor/1.1.0 Application Process Completed
2020-08-20 08:39:46, 760 XYZ_Processor/1.1.0 Random logs
2020-08-20 08:38:46, 760 XYZ_Processor/1.1.0 Random logs
2020-08-20 08:37:46, 760 XYZ_Processor/1.1.0 Application Process Id generated : 42343

I want the below results:

PID      START_TIME             END_TIME               TIME_TAKEN
42343    2020-08-20 08:37:46    2020-08-20 08:40:46    03:00:00
23232    2020-08-20 08:47:46    2020-08-20 08:52:46    05:00:00

Could anyone help with this? I have to extract the PID as the first field from the logs and print it in the first column, then the start time and end time of the process, and then the time taken. Thank you in advance.
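A sketch using transaction (the index is an assumption; PID is pulled out with rex, and the time taken comes from the transaction's duration field):

```
index=your_index ("Application Process Id generated" OR "Application Process Completed")
| rex "Process Id generated : (?<PID>\d+)"
| transaction startswith="Process Id generated" endswith="Process Completed"
| eval START_TIME=strftime(_time, "%Y-%m-%d %H:%M:%S"),
       END_TIME=strftime(_time + duration, "%Y-%m-%d %H:%M:%S"),
       TIME_TAKEN=tostring(duration, "duration")
| table PID START_TIME END_TIME TIME_TAKEN
```

Note that transaction without a grouping field pairs consecutive start/end markers; if multiple processes can overlap in time, grouping by PID first (e.g. with streamstats to carry the PID forward) would be the safer route.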