All Topics

All, I am having an issue with Splunk and uploading pcap files to the server we are running on-prem. When I attempt to upload a pcap file via the data inputs option, I receive the following error:

    Encountered the following error while trying to save: Splunkd daemon is not responding:
    ('Error connecting to /servicesNS/admin/launcher/data/inputs/upload_pcap: The read operation timed out',)

My thinking is the file could be too big and is timing out the daemon. If that is the case, can this timeout be edited? The pcap files are in the 139 MB size range. Thank you for any recommendations or insight.
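If the upload is simply outrunning Splunk Web's timeout toward splunkd, one commonly adjusted setting is splunkdConnectionTimeout in web.conf. A minimal sketch, assuming the default 30-second timeout is the bottleneck (verify the setting against the web.conf spec for your version; Splunk Web needs a restart afterwards):

    # $SPLUNK_HOME/etc/system/local/web.conf
    [settings]
    # seconds Splunk Web waits on splunkd before timing out (default 30)
    splunkdConnectionTimeout = 300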
If an HF is used for an intermediate/aggregation tier and the data is parsed, what does the ingestion pipeline look like when it hits the indexer? That is, if the HF does parsing, aggregation, and typing, but not indexing, does the data flow through those same queues at the indexer, or is it injected directly into the indexing queue?
Hi everyone, I'm trying to stop the following sourcetype from being indexed into Splunk using props/transforms on a HF, but with no luck. What am I doing wrong here?

props.conf:

    [pan:userid]
    TRANSFORMS-set-nullqueue = set_nullqueue

transforms.conf:

    [set_nullqueue]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

Thank you!!
All, the OOTB Kafka JMX configuration is incorrect. Maybe it was correct for an older version of the Kafka client library, but it is not correct any longer. For example, there are metrics defined using the object name match pattern "kafka.consumer:type=consumer-fetch-manager-metrics,*", such as records-lag-max. However, records-lag-max is not an attribute of any object matching that pattern.

I have fixed this problem manually and disabled the OOTB configuration. My new configuration uses the object name match pattern "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*,topic=*" and the instance identifier "client-id,topic". My new metrics do show up in the metric browser. If the client id is myClientId and the topic is myTopic, the metrics show under JMX|ConsumerMetrics|myClientId|myTopic.

However, I cannot seem to use the new metrics in a dashboard. When adding a time series to a dashboard, I can choose "JMX|ProducerMetrics" or "JMX|ConsumerMetrics" as the JMX objects to include. But when selecting the actual metric to display, I can't see any of my new metrics; I can only see the OOTB metrics like Average IO Wait. When selecting the JMX objects to include or the metrics to display, I cannot drill down further than ProducerMetrics or ConsumerMetrics, even though there are two levels below (corresponding to the client id and topic).

Thanks
Hey everyone, I want to create a search that gives me the following information in a structured way: which type of host sends data to which type of host using which port? In a table it would basically look like this: typeOfSendingHost | typeOfReceivingHost | destPort

At the moment I have the following search, which shows me which type of host is listening on which port. The subsearch is used to provide the type of system based on splunkname; the field splunkname is created in the main search for that purpose.

    (index="_internal" group=tcpin_connections)
    | rename host AS splunkname
    | join type=left splunkname [| search index=index2]
    | stats values(destPort) by type

Example output:

    type                     values(destPort)
    Indexer                  9995, 9997
    Intermediate Forwarder   9996, 9997

In the _internal index, the sending system is stored in the field "hostname" and the receiving system in "host". The field "destPort" is the port to which data is sent. Information about our systems is stored in index2. An event in index2 has the fields "splunkname" and "type". The field "splunkname" contains the hostname of the system (matching hostname/host), and "type" stores the type of the system (Forwarder, Indexer, Search Head...).

My question is, how can I make the results look like this?

    Sending System Type      Receiving System Type   destPort
    Intermediate Forwarder   Indexer                 9997

Thank you so much in advance
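One way to get both ends typed is to resolve the sending and receiving hostnames independently. A sketch that assumes the index2 data is exposed as a lookup (the lookup name host_types is hypothetical; the same idea works with two joins against index2):

    index="_internal" group=tcpin_connections
    | lookup host_types splunkname AS hostname OUTPUT type AS sending_type
    | lookup host_types splunkname AS host OUTPUT type AS receiving_type
    | stats values(destPort) AS destPort BY sending_type receiving_type
    | rename sending_type AS "Sending System Type", receiving_type AS "Receiving System Type"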
Where do I set columns to wrap text? The old dashboards had a "Wrap results" option.
I have the ServiceNow add-on for Splunk installed and I'm referencing this document for configuring ServiceNow as an alert trigger action. (A screenshot from the doc was included here for reference.)

My question is, can steps 7 and 8 be done via the Splunk API? I have about 100 alerts, and what I'd like to do is perform steps 7 and 8 programmatically (where I create a trigger action that uses ServiceNow Incident Integration and populate some of the values).
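In general, an alert's trigger actions are stored as action.* keys on the saved search, which can be set either in savedsearches.conf or by POSTing the same keys to the saved/searches REST endpoint. A minimal sketch of the conf representation, assuming the add-on's action name is snow_incident and with hypothetical parameter names (copy the exact keys from a manually configured alert's savedsearches.conf):

    # savedsearches.conf - the same keys can be POSTed to
    # /servicesNS/<user>/<app>/saved/searches/<alert_name>
    [My Existing Alert]
    # existing search/schedule settings stay unchanged; enable the action
    # (action name assumed to be snow_incident)
    action.snow_incident = 1
    # parameter names below are hypothetical placeholders
    action.snow_incident.param.account = my_snow_account
    action.snow_incident.param.short_description = Splunk alert fired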
Hi,

We currently have Splunk on-prem with a single site (Site A), version 7.3.4: 10 indexers, 5 SHs, 2 DSs, 2 HFs, CM, DMC, LM (distributed, clustered). We have recently built another site (Site B), version 9.0.1: 10 indexers, 4 SHs, 2 DSs, 2 HFs, CM, DMC, LM (distributed, clustered).

I have a few questions: How can we transfer/move the entire data set from Site A to our new Site B, and in what order? What are the prerequisites? What would be the expected downtime for this migration? Is there any clear implementation documentation or step-by-step guide?

Thanks in advance; your help would be highly appreciated!
I have two searches that return orderNumbers:

    1. index=main "Failed insert" | table orderNumber        (returns a small list)
    2. index=main "Successful insert" | table orderNumber    (returns a huge list)

I want a list of "Failed insert" orderNumbers that have NOT had a "Successful insert" previously. How can I use the results of the second search to filter the results of the first search?
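One common pattern is to exclude the second result set with a NOT subsearch; a minimal sketch (subsearch results are capped, so with a very large success list the stats variant below may be safer):

    index=main "Failed insert" NOT [ search index=main "Successful insert" | fields orderNumber ]
    | table orderNumber

The stats variant scans both event sets once and keeps orderNumbers that never had a success:

    index=main ("Failed insert" OR "Successful insert")
    | eval status=if(searchmatch("Successful insert"), "success", "failed")
    | stats count(eval(status="success")) AS successes count(eval(status="failed")) AS failures BY orderNumber
    | where failures > 0 AND successes = 0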
Not sure if this is possible through a Splunk query, but what I am trying to do is retrieve a field value from one search and pass it into another, and the same needs to be done twice to get the desired result. Consider these three events as _raw data:

    14:06:06.932 host=xyz type=xyz type_id=123
    14:06:06.932 host=xyz type=abc category=foo status=success
    14:30:15.124 host=xyz app=test

The first and second events go into the same index and sourcetype, but the third event is in a different index and sourcetype. The first and second events happen at exactly the same time. The expected result is to return the following field values: host, type, type_id, category, status, app.

Below is my search, in which I am able to successfully correlate and find the category and status fields from the second event:

    index=foo sourcetype=foo type=xyz
    | eval earliest = _time
    | eval latest = earliest + 0.001
    | table host type type_id earliest latest
    | map search="search index=foo sourcetype=foo type=abc host=$host$ earliest=$earliest$ latest=$latest$
      | stats values(_time) as _time values(type) as type values(category) as category values(status) as status by host"
    | append [search index=foo sourcetype=foo type=xyz | stats values(type) as type values(type_id) as type_id by host]
    | stats values(*) as * by host

The problem comes when I try to add another map command to retrieve the app value present in the third event. Basically, the following mapping should provide that result:

    | map search="search index=pqr sourcetype=pqr host=$host$ category=$category$ earliest=-1d latest=now | stats count by app"

This app value is then to be searched in one of the lookup files to get some details. I have tried multiple ways to incorporate this into the search, but no luck. Any help is appreciated.
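For the final step, once app is present in the results, a lookup can append the extra details; a one-line sketch with a hypothetical lookup name and output fields:

    ... | lookup app_details.csv app OUTPUT owner AS app_owner environment AS app_env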
Hi team, I am using field aliases, as my sourcetype has two common fields (dest and dest_ip) which hold the same values. When I applied the field aliases, both fields were still present. How do I avoid the duplicate fields? Kindly help with this scenario.
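For reference, a field alias only adds a second name for the same value; it never removes the original field. A minimal props.conf sketch (stanza name hypothetical), using ASNEW so an existing dest is not overwritten; to surface only one name at search time, renaming or dropping the duplicate with rename/fields is an alternative:

    # props.conf on the search head (stanza name hypothetical)
    [my_sourcetype]
    # alias dest_ip to dest without clobbering an existing dest value
    FIELDALIAS-dest = dest_ip ASNEW dest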
I'm trying to figure out the percentage of successful authentications from our vulnerability scans. There is a field named IP_Auth_Type, and if I do a stats count by that field I get the following values:

    Unix Failed
    Unix Not Attempted
    Unix Successful
    Windows Successful

I would like to add all of those counts together, then add Unix Successful and Windows Successful and divide by that total. This is what I have so far:

    | inputlookup vulnresults.csv
    | stats sum("Unix Failed") as UnixFailed_sum, sum("Unix Not Attempted") as UnixNotAttempted_sum, sum("Unix Successful") as UnixSuccessful_sum, sum("Windows Successful") as WindowsSuccessful_sum
    | eval total=UnixFailed_sum + UnixNotAttempted_sum + UnixSuccessful_sum + WindowsSuccessful_sum
    | eval ratio=(UnixSuccessful_sum + WindowsSuccessful_sum) / total
    | table UnixFailed_sum UnixNotAttempted_sum UnixSuccessful_sum WindowsSuccessful_sum total ratio

This doesn't bring any results, so any help would be greatly appreciated.
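Assuming the strings above are values of the IP_Auth_Type field rather than separate columns, one sketch counts matching rows directly with eval-filtered stats:

    | inputlookup vulnresults.csv
    | stats count AS total count(eval(IP_Auth_Type="Unix Successful" OR IP_Auth_Type="Windows Successful")) AS successful
    | eval ratio=round(successful / total * 100, 2)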
Hello Splunkers, I would like to understand why a cert is needed on the UF when the indexer already has requireClientCert disabled. Thanks in advance.

On the indexer, we have the following inputs.conf stanzas configured:

    [splunktcp-ssl:9997]

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
    sslPassword = mySecret
    requireClientCert = false

On the UF, we have the following outputs.conf stanzas configured:

    [indexer_discovery:cm1]
    master_uri = https://cm1:8089
    pass4SymmKey = mySecretSymmKey

    [tcpout]
    defaultGroup = ssl-test

    [tcpout:ssl-test]
    indexerDiscovery = master-es
    useACK = true
    useClientSSLCompression = false

The UF failed to connect to the indexer, with the following error seen in the UF's splunkd.log:

    02-11-2023 02:57:57.421 +0000 ERROR TcpOutputProc [1715593 TcpOutEloop] - target=x.x.x.x:9997 ssl=1 mismatch with ssl config in outputs.conf for server, skipping..

The issue is resolved once we set clientCert in the forwarder's outputs.conf stanza:

    [tcpout:ssl-test]
    indexerDiscovery = master-es
    useACK = true
    useClientSSLCompression = false
    clientCert = $SPLUNK_HOME/etc/auth/mycerts/MyClientCert.pem

From our tests so far, this requirement seems to be specific to splunktcp-ssl. Inter-Splunk communications between the UF and the deployment server or cluster manager (for indexer discovery) do not seem to require the client cert.
We had a Splunk indexer crash out of nowhere, and this is the message I received in the logs before the crash:

    Encountered S2S Exception=Unexpected duplicate in addUncommittedEventId eventid=57 received from for data received from src=*.

What is the cause of this?
Hello, this is my very first post, so corrections are welcome! I am looking for a way to add Select/Deselect All in Splunk Classic or, if possible, to change the delimiter in Studio. I have a list of IPs/emails and I query them in a multiselect from a lookup file. My issue is that in Studio the delimiter is ",", which does not work for me, as I need OR/AND. For Classic I did fix the delimiter, but I have to select each entry one by one. Is there any fix or workaround for my issues? Help is much appreciated. Thank you.
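For reference, in Classic (Simple XML) the multiselect's delimiter and value wrapping are configurable, and one common workaround for select-all is a static "All" choice whose value is a wildcard, so selecting it matches everything without ticking each entry. A minimal sketch with a hypothetical lookup name and field:

    <input type="multiselect" token="addr_tok">
      <label>IP / Email</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>address</fieldForLabel>
      <fieldForValue>address</fieldForValue>
      <search>
        <query>| inputlookup address_list.csv | fields address</query>
      </search>
      <!-- wraps each selected value and joins selections with OR -->
      <valuePrefix>address="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
    </input>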
Hello, I would like to create a report about our daily exports. For each day I want to see when the export started and when it ended. So on the X-axis I want to have the date, and on the Y-axis the time of day. (An example chart image was included here.) Additionally, I would like to add a "limit" line to show when the export has to be ready at the latest. How can I put the time of day on the Y-axis and a limit line on the chart? Thank you, Zuz
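One way to plot a time of day on the Y-axis is to reduce each timestamp to decimal hours and add the limit as a constant column; a sketch with a hypothetical base search and an illustrative 06:30 limit:

    index=exports sourcetype=export_log
    | bin span=1d _time AS day
    | stats earliest(_time) AS start latest(_time) AS end BY day
    | eval start_hour = tonumber(strftime(start, "%H")) + tonumber(strftime(start, "%M")) / 60
    | eval end_hour = tonumber(strftime(end, "%H")) + tonumber(strftime(end, "%M")) / 60
    | eval limit_hour = 6.5
    | eval day = strftime(day, "%Y-%m-%d")
    | table day start_hour end_hour limit_hour

Charted with day on the X-axis, limit_hour then renders as a flat reference line.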
While checking the historical data for one of the KPIs in one of my glass tables, the tile showed the latest alert_value for the global time range selected (the tile is a single-value visualization). But my itsi_summary index has multiple alert_value values, updated by my KPI base search running every 5 minutes. My global time range is 1 hour, and the glass table tile shows the latest alert_value from the 55-to-60-minute run. Ideally, it should aggregate all the alert values for the service over the selected range and show the final value in the tile (single value).
Hey there folks, I'm looking at a way to measure a decrease in logging levels by host and EventCode. I've set up the query below, which is fine from a static perspective, but I'm very much looking to identify the scenario where, let's say, we see a 50% drop-off in logs for a specific EventCode over the past 24-hour period.

    index=wineventlog sourcetype=WinEventLog EventCode=4625
    | fields EventCode, host
    | stats dc(host) as num_unique_hosts
    | where num_unique_hosts < 500

Is there an easy way to convert this over?

Thanks kindly, Tom
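One sketch for a relative comparison searches the last 48 hours, splits events into the current and previous 24-hour windows, and computes the percentage change per host and EventCode (the 50% cut-off is illustrative):

    index=wineventlog sourcetype=WinEventLog EventCode=4625 earliest=-48h
    | eval period=if(_time >= relative_time(now(), "-24h"), "current", "previous")
    | stats count(eval(period="current")) AS current count(eval(period="previous")) AS previous BY host, EventCode
    | eval pct_change = round((current - previous) / previous * 100, 1)
    | where previous > 0 AND pct_change <= -50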
I found this post, "Index and Forward data into another splunk instance", and then found the current version of the referenced documentation: Documentation - Splunk® Enterprise - Forwarding Data - Route and filter data. But I am still confused. We have a requirement to push data to another Splunk instance outside our immediate network. On which node of the Splunk cluster can I do this?
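For reference, one common pattern is to set up forwarding in outputs.conf on the nodes that already handle the data, i.e. the indexers (typically deployed from the cluster manager), with indexAndForward keeping a local copy. A minimal sketch, remote host hypothetical:

    # outputs.conf on each indexer (e.g. pushed from the cluster manager)
    [tcpout]
    defaultGroup = external_splunk
    indexAndForward = true

    [tcpout:external_splunk]
    server = remote-splunk.example.com:9997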
Hello all, we use the following Cisco apps, which are working fine in general:

Cisco Networks App for Splunk Enterprise (https://splunkbase.splunk.com/app/1352)
Cisco Networks Add-on for Splunk Enterprise (https://splunkbase.splunk.com/app/1467)

When I edit the dashboards of the Cisco Networks App, I can find the following macro, which should provide the ability to select a "tenant".

Macro: `get_tenants_for_user_role($env:user$)`

Expanded, it looks like this:

    inputlookup cisco_ios_tenants
    | stats values(index) AS index BY tenant_name,roles
    | eval index=mvjoin(index,",")
    | eval index=replace(index,","," OR index=")
    | eval index="index=" + index
    | search [| rest splunk_server=local /services/authentication/users/$user$ | fields roles]

Unfortunately, there is no lookup (definition) named cisco_ios_tenants, neither in the app nor in the add-on. I also found in the default.xml nav that there should be a <view name="cisco_networks_tenants" />; this does not exist either.

I'm wondering how that tenant support works and how it can be configured. Does anyone have information about this? I was not able to find anything.

Basically, what we want to achieve (maybe there is a better way of doing it): our network colleagues want the ability to select data based on something like a location or region. As the tenant macro is used in nearly all dashboards, I think that could be a way to solve the problem.

Thanks in advance! Many regards, Michael
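Judging from the fields the macro reads, the missing lookup could presumably be created by hand: a CSV mapping tenant_name and roles to one or more indexes, plus a matching lookup definition. A sketch under that assumption (row contents are illustrative):

    # lookups/cisco_ios_tenants.csv (illustrative rows)
    tenant_name,roles,index
    site_emea,network_emea,cisco_emea
    site_apac,network_apac,cisco_apac

    # transforms.conf, so that `inputlookup cisco_ios_tenants` resolves
    [cisco_ios_tenants]
    filename = cisco_ios_tenants.csv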