All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I recently created a Summary Index to use with some planned dashboards. To populate the Summary Index, I run a report each night with the time range set to Yesterday, use "bucket _time span=day" to summarize each day into one entry, and then add the results to the Summary Index. Right now I wish I had more historical data in that Summary Index, so I'm wondering if it's OK to rebuild the Summary Index from scratch, perhaps with a time range of Last 30 Days or Last 45 Days, and then the next day change the report schedule back to just Yesterday and continue on like that.
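In case it helps, here is a sketch of the one-time backfill search I'm considering (the index, sourcetype, and split-by fields are placeholders for my real report):

index=web sourcetype=access_combined earliest=-30d@d latest=@d
| bucket _time span=day
| stats count by _time, host
| collect index=my_summary

After running that once, the scheduled nightly version would keep the same body but with earliest=-1d@d latest=@d. I've also seen $SPLUNK_HOME/bin/fill_summary_index.py mentioned for backfilling scheduled summary searches, which may be the more official route.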
Right now I have a syslog server sending me security events. The syslog server is sending the data with TLS encryption. I have the PEM file, so that Splunk can do the three-way handshake and accept my data. My question is: where do I put that .pem file? Currently my inputs.conf file looks like this:

[tcp-ssl:520]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myCert.pem
sslPassword = PASSWORD

My server.conf file looks like this:

[sslConfig]
enableSplunkdSSL = true
sslPassword = $**************************
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCert.pem

My certificate is stored in C:\Program Files\Splunk\etc\auth\mycerts. What am I missing? Any help is appreciated. Thank you, Marco
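From what I've read, the SSL settings for a tcp-ssl input can also live in a global [SSL] stanza in inputs.conf rather than inside the port stanza; a sketch of what I think that would look like (the paths and the requireClientCert choice are my guesses):

[tcp-ssl:520]
sourcetype = syslog

[SSL]
# Assumption: same cert path as above; adjust to your deployment
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myCert.pem
sslPassword = PASSWORD
requireClientCert = false

Is that the right placement, or does everything belong under [tcp-ssl:520]?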
I have a dashboard, and some queries in the panels are taking longer than the allowed 60 seconds to complete. They are using stats count, but there are a lot of events to count, so it takes some time. I'm looking at making the queries rely on summary indexes instead in order to speed them up. But in the meantime, users of the dashboard silently get inconsistent results because they aren't aware the query exited before finishing. That is, data is rendered, but it's not clear to the user that it's incomplete. Is there a way in a dashboard to signal to the user that a panel reached the auto-finalize limit? Right now I can click the "information" icon ("i") and see this error: "The search auto-finalized after it reached its time limit: 60 seconds." But I'd like to detect and surface it automatically, if that's possible. Thanks!
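The closest idea I have so far is a JavaScript dashboard extension that inspects the search job after it stops; a sketch, assuming a Simple XML dashboard where the panel's search has id="mysearch" and that isFinalized shows up in the job properties on auto-finalize (both assumptions, untested):

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function(mvc) {
    var search = mvc.Components.get('mysearch');
    // Fires when the job stops, whether it finished or was finalized
    search.on('search:done', function(properties) {
        if (properties.content.isFinalized) {
            // Hypothetical: replace with however you want to surface the warning
            alert('Panel results are incomplete: the search was auto-finalized.');
        }
    });
});

If someone knows whether search:done fires on auto-finalize, that would settle it.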
How do I extract everything after the 3rd / from the left in: WinNT://PSAD/johndoe The output should be "johndoe" Thanks in advance for your assistance!
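A sketch of two ways this could go, assuming the value lives in a field called account (the field name is a placeholder):

| rex field=account "^(?:[^/]*/){3}(?<username>.+)$"

or, since the target here is also the last path segment:

| eval username=mvindex(split(account, "/"), -1)

The rex version literally skips three slashes and captures the rest; the split/mvindex version just takes the last segment.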
Hi all, I'm trying to send custom metrics to the AppDynamics SaaS controller using the machine agent, following this doc: https://docs.appdynamics.com/21.2/en/infrastructure-visibility/machine-agent/extensions-and-custom-metrics/machine-agent-http-listener

I'm using:
* machineagent-bundle-64bit-linux-22.1.0.3252.zip
* Rocky Linux release 8.4 (Green Obsidian)

My controller.xml is configured for the SaaS controller (contents not shown here). The listener port is open:

$ ss -nplut
tcp LISTEN 0 50 *:9999 *:* users:(("java",pid=945853,fd=197))

and I get a 204 response when publishing, but nothing shows up in the console. Can anyone help me?
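For reference, this is roughly how I'm publishing (the endpoint path, metric name, and payload shape are my best reading of that doc, so they may be wrong):

# Assumption: the HTTP listener accepts POST /api/v1/metrics with a JSON array
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://localhost:9999/api/v1/metrics \
  -H "Content-Type: application/json" \
  -d '[{"metricName": "Custom Metrics|Test|MyMetric", "aggregatorType": "AVERAGE", "value": 42}]'

That returns 204. Do the metric names need a specific prefix for them to appear in the Metric Browser?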
Dear Support,

We use X509 certificates provided by our customer's certificate authority, in order to use the HTTPS protocol for web pages and to encrypt the communication between instances with TLS 1.2:
- Modification of the file /opt/splunk/etc/system/local/web.conf for the web pages
- Modification of the file /opt/splunk/etc/system/local/server.conf for the encryption of the communication between the instances

If these certificates expire, can you tell us whether an outage is expected, or whether the solution will still work in a degraded mode, with warning messages indicating that the certificates are expired?

Thank you in advance for your answer.
BR
Malik GHALEB
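In the meantime, the expiry date of a certificate can be read with openssl (the path below is a placeholder for wherever your serverCert setting points):

# Prints e.g. "notAfter=Dec 31 23:59:59 2024 GMT"
openssl x509 -enddate -noout -in /opt/splunk/etc/auth/mycerts/ourCert.pem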
I have a dataset that looks like (id, foo, bar, user) that I want to show results for on a dashboard. Given an input combination of values for foo and bar, I want to know which ids both
    a) have at least one row that has BOTH of those values; and
    b) have at least one row that has NEITHER of those values
and then count the number of such ids by user. For example, a search on (foo=A, bar=1) for the data

id    foo  bar  user
1234  A    1    admin
1234  B    2    admin
1234  C    3    other_user
abcd  A    1    admin
abcd  A    2    admin

would count 1234, but not abcd, and return

admin       1
other_user  1

Each search parameter can be a single value or a comma-separated list. Empty values are permitted in up to one field at a time. This is the closest I have been able to get:

index="data" [
    | tstats count where index="data" AND foo IN (A) AND bar IN (1) by id
    | fields id
]
AND NOT (foo IN (A) OR bar IN (1))
| fields id, user
| stats dc(id) as ids by user

I believe the query does what I want it to, but unfortunately I am constrained by the hard limit of 10,500 results for subsearches. Is there a way to get the data I want without an intermediate command limiting my results?
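One direction I'm considering is flagging rows and using eventstats instead of a subsearch (a sketch for the single-value case; the in() lists would be built from the comma-separated inputs):

index="data"
| eval both=if(in(foo, "A") AND in(bar, "1"), 1, 0)
| eval neither=if(NOT (in(foo, "A") OR in(bar, "1")), 1, 0)
| eventstats max(both) as has_both, max(neither) as has_neither by id
| where has_both=1 AND has_neither=1 AND neither=1
| stats dc(id) as ids by user

This keeps only the "neither" rows of qualifying ids, like the original does, and avoids the subsearch limit entirely, though eventstats has its own memory limits to watch.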
Hello, is there a simple way to render the availability rate of a webpage in AppDynamics? I found SAM, which doesn't exist anymore. Vincent
Hi, I am trying to monitor our Unix boxes (RedHat) without success. I deployed the universal forwarder following the instructions (https://docs.splunk.com/Documentation/Forwarder/8.2.4/Forwarder/Configuretheuniversalforwarder). I installed the RPM and registered the deployment server and the receiving indexer:

./splunk add forward-server <host name or ip address>:<listening port>
./splunk set deploy-poll <host name or ip address>:<management port>

I correctly see the new Linux box in Splunk Web in the forwarder management. Then I installed the add-on for Unix, following the instructions too: https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Enabledataandscriptedinputs

I copied the add-on to the folder C:\Program Files\Splunk\etc\deployment-apps and deployed it to the Linux box using Splunk Web (I created the server class and assigned the client and the TA_nix). Then I logged into the Linux box and enabled the data inputs using the command line (https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Enabledataandscriptedinputs):

./splunk cmd sh $SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/setup.sh --enable-all

I restarted the Splunk forwarder as indicated in the instructions. Here is where I get lost... I don't see any mention of which index the events should go to. I don't see any new index created by the add-on, so I created the index "os" myself. Is this correct? I also added index=os to all stanzas in local/inputs.conf, and the events started to appear in the index "os". Is this the way to do it? Are there other actions that I missed? Thanks a lot
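For reference, this is the pattern I ended up with (one scripted-input stanza shown as an example; "os" is just the index name I chose):

In Splunk_TA_nix/local/inputs.conf on the forwarder:

[script://./bin/cpu.sh]
# Assumption: interval and sourcetype as in the add-on defaults
interval = 30
sourcetype = cpu
index = os
disabled = 0

And in indexes.conf on the indexer:

[os]
homePath = $SPLUNK_DB/os/db
coldPath = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb

Is this the expected setup, or does the add-on normally target a default index?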
Hello all,

In my organization we are trying to collect logs from all laptops/desktops into Splunk. I read somewhere that we can use logs collected from AV agents instead of installing universal forwarders. We have CrowdStrike agents on all our endpoint devices.

Is this the right method? If so, what are the use cases where we may still have to install a UF on endpoint devices?

Thank you
Every night my server crashes with an out-of-memory error, even though I have more than enough memory. In the event logs I get "Unable to allocate dynamic memory buffer" and "Faulting application name: splunkd ... faulting module name: ucrtbase.dll". The System log shows a low-memory condition. It looks like Splunk has a memory leak, so how do I find out what is causing it?
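One thing I'm planning to try is charting splunkd's own memory usage from the introspection data, to see which process type grows overnight (a sketch; I'm assuming the _introspection index is populated on this box and that data.mem_used is in MB):

index=_introspection host=<my_host> sourcetype=splunk_resource_usage component=PerProcess
| eval mem_used_mb = 'data.mem_used'
| timechart span=5m max(mem_used_mb) by data.process_type

If the growth lines up with a scheduled search or an input, that should narrow down the culprit.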
Upgrading from Splunk 7.3.9 to versions before 8.0.8 or 8.1.1 will fail. During the install process, the installer errors with messages relating to the following files:

libxml2.dll
libeay32.dll
ssleay32.dll

After dismissing these messages, the installer rolls back and reverts to the previously installed version 7.3.9. This appears to be related to the dates on these files, which the installer does not handle correctly and therefore does not overwrite. At the end of the failed install (before rollback), these 3 files are missing from the $SPLUNK_HOME/bin folder. I presume the installer removes (backs up) the original files and, during the install process, validates that the files to be installed are newer. In the case of these 3 files, the situation is not handled correctly and the installer fails to remedy it.
Hello experts, if I have only the IP addresses of hosts from a search, how do I look up their hostnames from a lookup table? Let's say I search index=network_device. I have a lookup table that contains the IP addresses and hostnames of all assets.
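A sketch of what I think this should look like, assuming the lookup file is named assets.csv with columns ip and hostname, and the search results carry the address in a field called src_ip (all of those names are placeholders):

index=network_device
| lookup assets.csv ip AS src_ip OUTPUT hostname

Does anything else need to be configured for this, such as a lookup definition?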
Hello Splunkers, is it possible to go back to the Classic Experience once you have upgraded to the Victoria Experience?
Hello, could you please help me get a better understanding of the UF? Can we still use the Splunk UF even after the license end date, i.e. will the Splunk agent still forward the data to QRadar?
Hello, I am aware that there is already a question from way back called "finding peak and low times from timechart". However, with that solution I can only get max and min values overall.

I tried to adapt the solution to my issue. Here it goes... I have multiple customers and want to find peaks for every one of them. While the solution

index=web GET OR POST
| timechart span=1h count
| eventstats max(count) as high, min(count) as low
| where (count=low OR count=high)
| fields _time, count

works perfectly for overall peaks, I struggle to get it flying with a "by" clause for customers... so something like:

| timechart span=1h count by customer
| eventstats max(count) as high, min(count) as low by customer

At this point, however, there is no "count" field anymore.

Kind regards, Mike
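Would something like this work instead? With bin + stats, customer stays a field value rather than becoming column names, so the per-customer eventstats has a "count" to work with (untested sketch):

index=web GET OR POST
| bin _time span=1h
| stats count by _time, customer
| eventstats max(count) as high, min(count) as low by customer
| where count=high OR count=low
| fields _time, customer, count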
Hi, I've created an alert for one of my main API services. How it works: it runs every 30 mins, looks at the failure rate and failed requests, and then, based on the threshold (failedRequest > 200 AND failure rate > 10%), it triggers the alert and raises an incident. Now there are times when, during those 30 mins, there is a short blip of 5 mins with a large number of errors while the rest of the time was normal. In that case the alert still fires because it meets the threshold. How can I avoid that? Is it possible to look at the number of errors and only trigger the alert if they are consistent for, say, 20 or 30 mins? How can I achieve that? Here is my sample query; let me know if anyone can advise on this, it will be immensely helpful.

index=myapp_prod source=myapp "message.logPoint"=OUTGOING_RESPONSE (message.httpResponseCode=50* OR message.httpResponseCode=20*)
| rename message.serviceName as serviceName message.httpResponseCode as httpResponseCode
| where (serviceName LIKE "my-service")
| stats count as totalrequests count(eval(httpResponseCode=200)) as successrequest count(eval(httpResponseCode=500 OR httpResponseCode=502 OR httpResponseCode=503)) as failedrequest
| eval Total = successrequest + failedrequest
| eval failureRatePercentage = round(((failedrequest/Total) * 100),2)
| where failureRatePercentage > 10 AND failedrequest > 200
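One shape this could take is splitting the 30-minute window into 5-minute buckets and requiring most of them to breach (a sketch; the "4 of 6 buckets" rule and per-bucket thresholds are assumptions to tune):

index=myapp_prod source=myapp "message.logPoint"=OUTGOING_RESPONSE (message.httpResponseCode=50* OR message.httpResponseCode=20*)
| rename message.serviceName as serviceName message.httpResponseCode as httpResponseCode
| where (serviceName LIKE "my-service")
| bin _time span=5m
| stats count(eval(httpResponseCode=200)) as ok count(eval(httpResponseCode=500 OR httpResponseCode=502 OR httpResponseCode=503)) as failed by _time
| eval bucketRate = round((failed / (ok + failed)) * 100, 2)
| eval breached = if(bucketRate > 10, 1, 0)
| stats sum(breached) as breachedBuckets sum(failed) as totalFailed
| where breachedBuckets >= 4 AND totalFailed > 200

That way a single bad 5-minute blip can't fire the alert on its own; at least 20 of the 30 minutes have to be over the rate threshold.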
Good morning, I've followed guides/forums and steps on this site but still can't get my blacklists to work at all. The situation is that I've set up a Splunk alert monitor dashboard, and one of the alerts is on new process starts; the Splunk forwarder is causing hundreds of alerts on this, so I want to blacklist it. Firstly, could someone please confirm which inputs.conf to edit, as there are multiple? Secondly, is this order correct?

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = 4689,5158

i.e. is the blacklist option in the right place? There are a few other lines in the inputs.conf I've found, like oldest first. Finally, what string will actually work and stop me seeing all processes started by Splunk?

Thank you in advance.
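For the last part, the syntax I've been experimenting with is a regex-based blacklist on the 4688 message rather than blacklisting whole event codes (untested; the exact binary names and regex are my guesses):

[WinEventLog://Security]
disabled = 0
current_only = 1
# Assumption: match the "New Process Name" line for Splunk's own binaries
blacklist1 = EventCode="4688" Message="New Process Name:\s+.*\\(?:splunkd\.exe|splunk-winevtlog\.exe|btool\.exe)"

Would that be the right approach to suppress just the Splunk-spawned process events while keeping the rest of 4688?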
Hello, we would like to use the rising input mode for a DB Connect (2.x) query. Unfortunately, the table is an Oracle table and it only has a DATE field that could be used as the rising column, but, if I read the documentation correctly, this may lead to duplicates. Is this correct? Is there any other way to solve this problem without modifying the source table? Thanks
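One workaround we're considering is converting the DATE to a sortable number inside the input's SQL, so it can serve as the rising column without touching the table (a sketch; table and column names are placeholders, and duplicates within the same second would still be possible):

-- Assumption: event_date is the DATE column; ? is the rising-column checkpoint
SELECT t.*, TO_NUMBER(TO_CHAR(t.event_date, 'YYYYMMDDHH24MISS')) AS rising_ts
FROM my_schema.my_table t
WHERE TO_NUMBER(TO_CHAR(t.event_date, 'YYYYMMDDHH24MISS')) > ?
ORDER BY rising_ts ASC

Would DB Connect accept a computed column like rising_ts as the rising column?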
I need to add an export button at the top of a dashboard; simply one that recalls the built-in Splunk function that lets you select the name, file type, and number of events saved. I found the element that can recall that function: by typing the call below in the browser console I can trigger it, but I'm having problems integrating it into a button I can put in my dashboard. Does anyone have any idea how to do it? The function to call it is:

document.getElementsByClassName("btn-pill export")[0].click()

Any help will be appreciated. Thanks
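What I have in mind is something like this (a sketch; it assumes a Simple XML dashboard, a JS file under the app's appserver/static referenced with script="export_button.js", and that the first "btn-pill export" element belongs to the panel I want):

In the dashboard:

<html>
  <button id="my_export_btn" class="btn">Export...</button>
</html>

In export_button.js:

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Wire the custom button to the panel's built-in export dialog
    $('#my_export_btn').on('click', function() {
        var pill = document.getElementsByClassName('btn-pill export')[0];
        if (pill) {
            pill.click();
        }
    });
});

Is there a cleaner hook than reaching for the pill element directly?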