All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, it seems like I'm unable to connect to Splunk Enterprise any longer. I keep getting:

This page isn't working. 127.0.0.1 didn't send any data. ERR_EMPTY_RESPONSE

I've tried checking my firewall, but still no change. Please help.
Hi all, I can see the logs coming in from a particular source=das*.log on the backend Linux host, but when I search with the same source I cannot see the data in the UI. One more thing: if I search with the index name and the source together, I also get no data in the UI. Note: when I searched the _internal index I could see logs from that host IP, but not from that source in the UI. Can anyone help with this issue?
Hi, I have a field named Details. This field contains a lot of information in varying formats, e.g. software installed on endpoints, updates installed, etc. I need to extract this information from the field. Samples are below. What is the best approach? I need both a configured field extraction (in the configs) and an ad-hoc Splunk search using rex or eval.

Fields to be extracted:
* Path
* Version/Installed version: both need to be extracted in a way that *Version* covers the variations.
* Method/Detection Method: both need to be extracted in a way that *Method* covers the variations.

Variation 1:

```
<plugin_output>
Path : /opt/AdoptOpenJRE/jdk8u332-b09-jre/
Version : 1.8.0_332
Binary Location : /opt/AdoptOpenJRE/jdk8u332-b09-jre/bin/java
Details : This Java install appears to be Java Runtime Environment, since "jre" was found in the installation path and javac was not found (medium confidence). This Java install may be Oracle Java or OpenJDK Java due to "org.openjdk.java.util" in the binary (low confidence).
Detection Method : "find" utility
</plugin_output>
```

Variation 2:

```
<plugin_output>
Path : /HP/hpoa/CADE2/HP/nonOV/openadaptor/1_6_5/classes/oa_jdk14_classes.jar
Version : 1.1.0
JMSAppender.class association : Found
JdbcAppender.class association : Found
JndiLookup.class association : Not Found
Method : MANIFEST.MF dependency
</plugin_output>
```

Variation 3:

```
<plugin_output>
Path : /opt/IBM/WebSphere855/AppServer/java_1.7_64/
Installed version : 7.0
Fixed version : 7.0.11.5
Path : /opt/IBM/WebSphere855/AppServer.old/java_1.7_64/
Installed version : 7.0
Fixed version : 7.0.11.5
Path : /opt/IBM/WebSphere855/AppServer.gagan/java_1.7_64/
Installed version : 7.0
Fixed version : 7.0.11.5
Path : /opt/IBM/InstallationManager/eclipse/jre_7.0.100001.20170309_1301/
Installed version : 7.0
Fixed version : 7.0.11.5
</plugin_output>
```

Thanks in advance!!
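A hedged rex sketch for the extraction described above (the capture patterns are assumptions derived from the three samples; `max_match=0` makes each field multivalued so Variation 3's repeated Path/version pairs are all captured):

```
| rex field=Details max_match=0 "(?m)^Path\s*:\s*(?<Path>\S+)"
| rex field=Details max_match=0 "(?mi)^(?:Installed\s+)?version\s*:\s*(?<Version>\S+)"
| rex field=Details max_match=0 "(?mi)^(?:Detection\s+)?Method\s*:\s*(?<Method>.+)$"
```

The same regexes could be moved into props.conf as search-time `EXTRACT-` stanzas once they are validated against real events.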
I am trying to create a simple bar chart to check and report the status of a service, something similar to Intercom Status (https://www.intercomstatus.com). The bar will show green if the service is up (1) and red if the service goes down (0), as a per-day status, like a 0 (red) and 1 (green). Any suggestions on whether this can be achieved with Splunk?
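One way to sketch this: compute a per-day status with timechart and color the result in the visualization. The index, field, and service names below are placeholder assumptions; `min(status)` marks a day red if the service was down at any point during it:

```
index=service_health service="my_service"
| timechart span=1d min(status) AS daily_status
```

Rendered as a column chart, the chart's color ranges (or a rangemap) can then map 1 to green and 0 to red.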
index=XYZ source=abc*.logs host=kfg

When I checked the _internal index, data is coming from the host. I checked that the forwarder server class mapping is fine, and I could see the app deploying. But I still cannot see the data. What other steps do I need to follow to get data into index XYZ?
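Two hedged checks that can narrow this down (index and host names taken from the post; run them separately). The first confirms whether anything at all landed in the target index; the second surfaces forwarder-side errors:

```
| tstats count WHERE index=XYZ host=kfg BY source

index=_internal host=kfg (log_level=ERROR OR log_level=WARN)
| stats count BY component
```

If the first search returns nothing but the second shows TailReader/TailingProcessor warnings, the problem is likely on the input side rather than the server class.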
Splunk can't start
Hello, we are using Splunk with CAC / Smart Card authentication and want to add to our configuration the ability to map LDAP groups to roles within Splunk.

What we'd like to have happen:
* User logs in with CAC / Smart Card authentication with PIN.
* Splunk looks up the user in an LDAP directory to get their group memberships.
* Splunk maps group membership into a role like "user" or "admin" within the application.

CAC / Smart Card authentication means we've centralized our authentication. What we're looking for is to build on that to centralize authorization by using LDAP group membership to determine the correct permissions for each user.

How Splunk is currently configured:
* A web server like Apache is configured to require TLS client certificate authentication.
* The web server finds the user's ID (or equivalent field within the TLS client certificate data).
* The web server assigns that user ID to an HTTP header, e.g. `X-MY-REMOTE-USER-ID`.
* The web server reverse proxies the connection to the Splunk web application server.
* The Splunk web application is configured, via `web.conf`, to use SSO with the `remoteUser` configuration setting to set the Splunk user based on the value of the HTTP header.

Is there a way to achieve the configuration we're looking for?
Here is our existing Splunk authentication configuration:

`$SPLUNK_HOME/etc/system/local/web.conf`

```
[settings]
SSOMode = strict
enableSplunkWebSSL = true
httpport = 8443
login_content = <div>REDACTED</div>
privKeyPath = /path/to/key.pem
remoteUser = X-MY-REMOTE-USER-ID
remoteUserMatchExact = 1
serverCert = /path/to/tls/cert.pem
tools.proxy.on = false
trustedIP = 127.0.0.1
updateCheckerBaseURL = 0
keepAliveIdleTimeout = 270
server.thread_pool = 100
tools.sessions.timeout = 15
```

`$SPLUNK_HOME/etc/system/local/authorization.conf`

```
# cat authentication.conf
[authentication]
authType = Splunk

[splunk_auth]
constantLoginTime = 0.000
enablePasswordHistory = 1
expireAlertDays = 15
expirePasswordDays = 60
expireUserAccounts = 1
forceWeakPasswordChange = 1
lockoutAttempts = 3
lockoutMins = 1440
lockoutThresholdMins = 15
lockoutUsers = 1
minPasswordDigit = 1
minPasswordLength = 15
minPasswordLowercase = 1
minPasswordSpecial = 1
minPasswordUppercase = 1
passwordHistoryCount = 5
verboseLoginFailMsg = 0
```
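For reference, Splunk's SSO mode can sit on top of the LDAP authentication scheme: with `authType = LDAP`, the user name arriving via `remoteUser` is looked up in the LDAP strategy, and roles come from the strategy's `roleMap` stanza. A hedged `authentication.conf` sketch — every hostname, DN, and group name below is a hypothetical placeholder to adapt:

```
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
bindDNpassword = REDACTED
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = uid
realNameAttribute = cn
emailAttribute = mail
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

[roleMap_corp_ldap]
admin = splunk-admins
user = splunk-users
```

The key constraint to verify is that `userNameAttribute` returns exactly the value your proxy puts in the `X-MY-REMOTE-USER-ID` header, since `remoteUserMatchExact = 1` requires an exact match.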
I am working on something to return our alerts from the REST endpoints. What I want to do is allow users to look at the alert query historically and see what adjustments can be made to certain items.

```
| rest "/servicesNS/-/-/saved/searches"
| search title="SomeAlert"
| fields qualifiedSearch
```

From the search above, I want Splunk to run the qualifiedSearch, which is the search string. Is this something that is possible?
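One common trick for this is the `map` command, which re-runs a search template with field values substituted in. A hedged sketch (assumes a single matching alert; `map` runs with the searching user's permissions and has a `maxsearches` cap):

```
| rest "/servicesNS/-/-/saved/searches"
| search title="SomeAlert"
| fields qualifiedSearch
| rename qualifiedSearch AS search
| map search="$search$" maxsearches=1
```

Renaming the field to `search` lets the `$search$` token expand to the full saved-search string.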
How can I get the "last time" there was traffic on one of the services / for a particular client?
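A minimal sketch, assuming the traffic lives in an index `traffic` with `service` and `client_ip` fields (all names here are placeholders for your own data):

```
index=traffic client_ip="10.1.2.3"
| stats latest(_time) AS last_seen BY service
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```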
Hello all, is there a way to sample the resulting events from a transaction? Thanks!
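One approach is to build the transactions first and then keep a random subset of them. A sketch, assuming `session_id` is the transaction key (a placeholder name):

```
index=web
| transaction session_id
| eval sample=random() % 10
| where sample=0
| fields - sample
```

This keeps roughly 1 in 10 transactions; adjust the modulus to change the sampling rate.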
Hi, I created a table using Splunk Dashboard Studio (absolute layout). However, a column contains results like A, B, C, 0, 1. A, B, and C display aligned left, while 0 and 1 display aligned right. I want all of them aligned left. When I select the code option to add the align setting, I keep getting an error and it does not align left. How should I code this?

```
"options": {
    "columnFormat": {"align": "left"}
}
```
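In Dashboard Studio, `columnFormat` is keyed per column (by field name) rather than set globally, which may be what triggers the error. A hedged sketch of the table visualization's options stanza — the column name `status` is a placeholder, and the `align` key should be verified against the table options in your Studio version:

```
"options": {
    "columnFormat": {
        "status": {
            "align": "left"
        }
    }
}
```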
Hey Splunk people, I have a tricky problem. I want to do the following in one search:
1. Search DHCP logs for a MAC address and return all IP addresses that were assigned, and the time range that each IP address was assigned to that MAC address (I have this part figured out).
2. Search a different index to get the domains each IP reached out to, but only for the time range that each IP address was assigned to that MAC address.
3. Make a table of each IP address, the time range it was assigned to the MAC address, and a list of all domains accessed during that time range.

Can anyone figure this out?

Thanks,
Dan
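Steps 2 and 3 can be driven off step 1 with `map`, substituting each IP's lease window into a per-IP subsearch (epoch times are valid values for `earliest`/`latest`). A hedged sketch where the index and field names (`dhcp`, `proxy`, `mac`, `ip`, `src_ip`, `domain`) are placeholders for your data:

```
index=dhcp mac="AA:BB:CC:DD:EE:FF"
| stats earliest(_time) AS lease_start latest(_time) AS lease_end BY ip
| map maxsearches=50 search="search index=proxy src_ip=$ip$ earliest=$lease_start$ latest=$lease_end$ | stats values(domain) AS domains BY src_ip | eval lease_start=$lease_start$, lease_end=$lease_end$"
```

Each `map` iteration returns one row per IP with its lease window and the domains seen inside it.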
Hi splunkers, I am currently searching for a way to put the dashboard description in bullet form, to make it more readable for the users. Is there a way to do this in the default version of 8.1.3? Thank you.

```
<description>1) Select one from the "Select to View" dropdown button. 2) Input the desired month to be generated by placing what date to start from in the "From MM/DD/YYYY" input and the end period in the "To MM/DD/YYYY" input similar to the default values 3) Click the "Submit" button to generate the report (Note: Submit button must be used whenever selecting another option from "Select to View" dropdown or when a new Date Range is placed in the text input)</description>
```
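Since the Simple XML `<description>` element renders as plain text, a common workaround is to move the instructions into an `<html>` panel, which supports list markup. A sketch using the instructions above:

```
<row>
  <panel>
    <html>
      <ul>
        <li>Select one from the "Select to View" dropdown button.</li>
        <li>Input the desired month by placing the start date in the "From MM/DD/YYYY" input and the end period in the "To MM/DD/YYYY" input, similar to the default values.</li>
        <li>Click the "Submit" button to generate the report. Note: the Submit button must be used whenever selecting another option from the "Select to View" dropdown or entering a new date range.</li>
      </ul>
    </html>
  </panel>
</row>
```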
Looking for a library of conf files providing sourcetyping for many of the vRealize Log Insight logs (aka VRLI, aka VMware vRealize Log Insight). Help me out.
Hi, I have a metric index that has multiple metrics coming into it. I know I can run a command like the one below, but I have over 20 different types of metrics and they might change over time. I know I can't run count(*), as you have to specify a metric.

```
| mstats count("mx.process.cpu.utilization") as count WHERE "index"="murex_metrics" span=10s
| stats count
```

Then I tried the following; however, if the data is the same it will only give you a unique count, not a correct count.

```
| mpreview index=murex_metrics
| stats count
```

So is there any command that will give me the stats count of a metric index quickly?
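One pattern worth trying is aggregating `_value` with a wildcarded `metric_name` in the WHERE clause, so the individual metric names never have to be listed (sketch only; verify against the mstats syntax of your Splunk version):

```
| mstats count(_value) AS count WHERE index="murex_metrics" metric_name="*" span=10s BY metric_name
| stats sum(count) AS total
```

Dropping the `BY metric_name` clause gives a single overall count instead of a per-metric breakdown.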
Hi, currently I have a left panel and a right panel on my dashboard. The left panel is a list of links to Dashboards A, B, and C, and the right panel is the main dashboard. Is it possible that I can click on, let's say, the Dashboard B link in the left panel, and the right panel gets refreshed and displays Dashboard B? I'd appreciate any code example you can share with me. Thank you.
Hello Team, we would like to understand the approach and the licensing requirements for installing Splunk ES on clustered indexers and search heads. Will we need an identical license on both clusters? Regards, Vikram Chabra Vikram@Metmox.com
Hi all, can we see the past readings of a single value graph over a time range? For example, if at this moment the single value graph shows a value of 40, then after 10 seconds it becomes 50, and then 30, can we see all these points in a timechart or some other visualization? Or is it possible to continuously export that specific value from the single value chart, using something like a token, and append it to some other graph?
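If the single value is driven by a time-based search, the history is already available: the Single Value visualization can render a trendline/sparkline from a timechart result, and the same search can be shown as a line chart to see every past reading. A sketch, where the index and the `value` field are placeholder assumptions:

```
index=app_metrics
| timechart span=10s latest(value) AS value
```

Feed this to the Single Value viz for the trend, or to a line chart for the full history.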
Hi, I have a predefined scheme: the original object has id = 123, and the children objects have id = mother id + suffix, e.g. 12356. I have a CSV file in Lookups like this:

```
Id       Type
12312    adult
12345    children
12367    adult
12398    adult
12368    children
123985   elder
1239647  elder
```

How can I search for all Ids belonging to each type of an object, e.g. type = adult or type = children, belonging to object id = 123?

Thanks in advance!
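Since the children ids share the mother id as a prefix, one option is a prefix match against the lookup. A sketch (the lookup file name `objects.csv` is an assumption; field names follow the table above):

```
| inputlookup objects.csv
| eval mother_id="123"
| where like(Id, mother_id."%") AND Type="children"
```

Note that a pure prefix match cannot tell a suffix apart from an unrelated longer id that merely starts with 123, so you may need extra rules (e.g. on id length) depending on your id scheme.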
Hello there. I'm having a performance problem. I have a "central UF" which is supposed to ingest MessageTracking logs from several Exchange servers. As you can guess from the "several Exchange servers" part, the logs are shared over CIFS shares (the hosts are in the same domain; to make things more complicated to debug, only the service account the UF runs with has access to those shares, but my administrator account doesn't :-)).

Anyway, since there are several Exchange instances and each of the directories has quite a lot of files, the UF sometimes gets "clogged" and, especially after restart, needs a lot of time to check all the logfiles, decide that it doesn't need to ingest most of them, and start forwarding real data. To make things more annoying, since these monitor inputs go through the same machinery that is responsible for ingesting the forwarder's own logs, until this process completes I don't even have _internal entries from this host and have to check the physical log files on the forwarder machine to do any debugging or monitoring. The Windows events, on the other hand, get forwarded right from the forwarder restart.

So I'm wondering whether I can do anything to improve the efficiency of this ingestion process. I know that the "officially recommended" way would be to install forwarders on each of the Exchange servers and ingest the files straight from there, but due to organizational limitations that's out of the question (at least at the moment). So I'm stuck with just this one UF. I already raised thruput, but judging from metrics.log it's not an issue of output throttling and queue blocking. I raised ingestion pipelines to 2, and my descriptors limit is set at 2000 at the moment.
The typical single-directory monitor input definition looks something like this:

```
[monitor://\\host1\mtrack$\]
disabled = 0
whitelist = \.LOG$
host = host1
sourcetype = MSExchange:2013:MessageTracking
index = exchange
ignoreOlderThan = 3d
_meta = site::site1
```

And I have around 14, maybe 16 of those to monitor. Which means that when I do `splunk list inputstatus` I'm getting around 500k files (most of them get ignored, but they have to be checked first for modification time and possibly CRC)! I think I will have to tell the customer that it's simply beyond the performance limits of any machine (especially when doing all this file stat'ing over the network), but I was wondering if there are any tweaks I could apply even in this situation.
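A couple of hedged knobs that can reduce the per-restart file-checking load (the values are illustrative, not recommendations, and should be tested against your retention needs):

```
# inputs.conf - tighten the window so fewer files pass the age check;
# files older than the threshold need little more than a modification-time stat
[monitor://\\host1\mtrack$\]
ignoreOlderThan = 1d

# limits.conf - let the tailing processor keep more file descriptors open at once
[inputproc]
max_fd = 256
```

Tightening `ignoreOlderThan` is probably the biggest lever here, since it shrinks the set of files that get any attention beyond a stat over the network.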