All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Team, We have installed the Alert Manager app on the SH and it works fine for the admin, but it does not work for regular users. When a user logs in and accesses the Alert Manager app, it fetches the dashboard and view from another app. Has anyone faced this kind of issue?
Hi, We have a single-server machine where Splunk Enterprise is installed. Configuration:

CPU - 1
Cores - 8
RAM - 32 GB

We have implemented several dashboards, charts, and tables, and more than 5 users access this data concurrently. As a result, we faced a lot of performance issues, such as searches waiting in the queue or the maximum number of concurrent searches being reached. Because of this, we were forced to change the default configuration in limits.conf and update the following attributes:

max_searches_per_cpu = 4
base_max_searches = 6

Now, according to the formula: number of concurrent searches = 4 * (1 * 8) + 6 = 38 (as per my understanding, it will handle 38 searches concurrently). After the changes, it reaches a maximum of 56 concurrent searches and CPU usage of around 80 to 90 percent. Questions: Is this configuration recommended? Please suggest values for these attributes, or an alternate approach for the current system configuration. Many thanks.
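As a sanity check of the arithmetic in the post above, the concurrent-search ceiling follows directly from the cited formula (max_searches_per_cpu x number of CPU cores + base_max_searches); a minimal sketch:

```python
# Sketch: compute the concurrent historical-search ceiling from the
# limits.conf values, using the formula cited in the post:
#   max searches = max_searches_per_cpu * number_of_cpus + base_max_searches

def max_concurrent_searches(max_searches_per_cpu: int,
                            number_of_cpus: int,
                            base_max_searches: int) -> int:
    """Upper bound on concurrent historical searches."""
    return max_searches_per_cpu * number_of_cpus + base_max_searches

# The poster's box: 1 CPU with 8 cores, after the limits.conf change.
print(max_concurrent_searches(max_searches_per_cpu=4,
                              number_of_cpus=8,
                              base_max_searches=6))  # → 38
```

This confirms the 38 figure the poster computed; the observed 56 would have to come from other search types or settings not shown in the post.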
Hi all, We are trying to analyze the syslog output from Polycom, such as server.log, access.log, etc. However, we don't understand the content of those logs. Is there anywhere we can get a detailed description of them?
Consider the result of a Splunk query:

Name | Path
Path1 | \abc\p1
Path2 | \abc\p2

I want to click a row and navigate to the path. For web URLs I can achieve this using $click.value2|n$, but how can I achieve this in the case of folders and shared paths?
I have created a search to match users in search results against users in a lookup:

| inputlookup AD_User_LDAP_list append=true where OU IN ("staff", "contractors") cn!=DEL cn!=Qual*
| fields sAMAccountName
| eval matchfield=sAMAccountName
| join matchfield
    [search index="windows_events" sourcetype=XmlWinEventLog source=XmlWinEventLog:Security host=dc* action=success user = . NOT user=.da NOT user=.sa NOT user=.fnpa NOT user=.fpa (EventCode=4624 OR EventCode=4634)
    | eval matchfield = user]
| table user sAMAccountName

What I am trying to accomplish now is to table the users that were not matched against the lookup field. The lookup field has 261 users, the search found 208 users, and I want to display the 53 users from the lookup field sAMAccountName that were not matched. TIA
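The "in the lookup but not in the events" question above is, logically, a set difference (in SPL one common approach is filtering the lookup with a NOT [subsearch] rather than join). The underlying logic can be sketched like this; the account names are made up for illustration and are not from the original search:

```python
# Sketch: users present in the lookup but absent from the event data is a
# set difference. The account names below are hypothetical.

lookup_users = {"alice", "bob", "carol", "dave"}   # sAMAccountName values from the lookup
event_users = {"alice", "carol"}                   # user values found in the events

unmatched = sorted(lookup_users - event_users)     # in the lookup, not in the events
print(unmatched)  # → ['bob', 'dave']
```

With 261 lookup users and 208 matched users, the difference is the 53 the poster wants to display.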
Hi, I am getting this error over and over again. Any ideas?

03-11-2020 11:16:36.630 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:36.630 +0100 WARN HttpListener - Socket error from 127.0.0.1:45500 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:39.415 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:39.415 +0100 WARN HttpListener - Socket error from 127.0.0.1:45506 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:42.158 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:42.158 +0100 WARN HttpListener - Socket error from 127.0.0.1:45516 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:44.866 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:44.866 +0100 WARN HttpListener - Socket error from 127.0.0.1:45522 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:47.663 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:47.663 +0100 WARN HttpListener - Socket error from 127.0.0.1:45526 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:50.440 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:50.440 +0100 WARN HttpListener - Socket error from 127.0.0.1:45532 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:53.164 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:53.164 +0100 WARN HttpListener - Socket error from 127.0.0.1:45540 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
03-11-2020 11:16:55.882 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
03-11-2020 11:16:55.882 +0100 WARN HttpListener - Socket error from 127.0.0.1:45546 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number

Thanks in advance, Robbie
Hello, I'm trying out the Splunk HTTP Event Collector but am not able to make a request:

curl "https://myinstance123.cloud.splunk.com/services/collector" \
  -X POST \
  -d "{\"_json\":\"hello world\"}" \
  -H "Authorization: Splunk myToken"

I'm getting this:

<html>
<head><title>406 Not Acceptable</title></head>
<body bgcolor="white">
<center><h1>406 Not Acceptable</h1></center>
<hr><center>nginx/1.14.1</center>
</body>
</html>

I really need to test this and am not sure what to do. I can't disable SSL from the global settings as mentioned in the docs. I also tried with port 8088, but it's not working.
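One thing worth checking in the request above is the payload shape: HEC expects the data wrapped in an "event" key (the "_json" key in the post is not what the collector looks for). A minimal sketch of building a well-formed HEC payload; the hostname and token are the post's placeholders, not real values:

```python
import json

# Sketch: build a well-formed HTTP Event Collector request. HEC expects
# the data under an "event" key. Host and token below are placeholders.
hec_url = "https://myinstance123.cloud.splunk.com:8088/services/collector/event"
token = "myToken"  # placeholder

payload = json.dumps({"event": "hello world", "sourcetype": "manual"})
headers = {"Authorization": f"Splunk {token}"}

print(payload)  # the JSON body to POST to hec_url with the headers above
```

The actual send could then be done with curl or urllib.request against hec_url; whether Splunk Cloud requires a different hostname prefix or port for HEC is a separate question best confirmed in the docs for the specific stack.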
Suppose I've got a result table in Splunk:

Name | Path
Path1 | \share\p1
Path2 | \share\p2

I want to click on the path in the table and navigate to the folder location. I'm able to do it if the path is a web URL, using <link target="_blank">$click.value2|n$</link>, but how can I achieve it in the case of a folder or shared path?
I get an error on the Forwarder Summary dashboard that I am not sure how to tackle. If I run a GET request locally from the same machine, with the same URI and the same user, I get a response; only the UI seems unable to connect.
Hi, We are setting up Splunk in AWS and we currently have a cluster with 1 master, 3 indexers, 1 deployer, and 3 search heads (including 1 captain). We do not use any forwarders, so we enabled HEC on the indexers. Now we are trying to set up a load balancer in front of the indexers to send data to be indexed. Is there a recommended way to configure this ELB for indexers with HEC and no forwarders in AWS? Below are a few more questions:

1. Which kind of LB is appropriate for this use case - ALB or NLB? I think ALB is the correct one, as it supports HTTP and HTTPS.
2. How can we do the health checks? How can I configure the ELB to have health checks for both scenarios: (a) the indexer node going down, and (b) Splunk/HEC going down on the indexer - in which case the ELB should not route traffic to that node. Do I need to set up the ELB target as a Lambda function to achieve this goal?

Any help is highly appreciated.
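On question 2: a Lambda target should not be necessary, since HEC exposes a health endpoint that an ELB target-group health check can probe directly, covering both the node-down and HEC-down cases at once. A sketch of the per-indexer URL such a health check would hit (the hostname is a placeholder, and the endpoint path is an assumption worth verifying against the HEC docs for your Splunk version):

```python
# Sketch: build the HEC health-check URL for an ELB target group to probe.
# A healthy collector answers HTTP 200 on this endpoint; if the node or
# the HEC service is down, the probe fails and the LB stops routing to it.

def hec_health_check_url(indexer_host: str, hec_port: int = 8088) -> str:
    """Per-indexer HEC health-check URL (hostname is a placeholder)."""
    return f"https://{indexer_host}:{hec_port}/services/collector/health"

print(hec_health_check_url("indexer-1.internal"))
# → https://indexer-1.internal:8088/services/collector/health
```

The target group would then be configured with this path and the HEC port, so traffic only reaches indexers whose collector is actually accepting events.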
I added the HttpEventCollectorLogbackAppender to my logback configuration and included the logstash encoder in my API. Then I tried to start my application, and I am getting the errors below:

Exception in thread "main" java.lang.IllegalStateException: Logback configuration error detected:
ERROR in ch.qos.logback.core.joran.spi.Interpreter@327:66 - no applicable action for [encoder], current ElementPath is [[configuration][appender][encoder]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@328:17 - no applicable action for [fieldNames], current ElementPath is [[configuration][appender][encoder][fieldNames]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@329:15 - no applicable action for [version], current ElementPath is [[configuration][appender][encoder][fieldNames][version]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@330:18 - no applicable action for [levelValue], current ElementPath is [[configuration][appender][encoder][fieldNames][levelValue]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@332:23 - no applicable action for [timestampPattern], current ElementPath is [[configuration][appender][encoder][timestampPattern]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@333:32 - no applicable action for [shortenedLoggerNameLength], current ElementPath is [[configuration][appender][encoder][shortenedLoggerNameLength]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@334:24 - no applicable action for [includeCallerData], current ElementPath is [[configuration][appender][encoder][includeCallerData]]
====================================
I added the following in my logback.
I have an event containing 3 errors. I have a regular expression written to capture the error as "ERROR", and I have a lookup file where I input the ERROR value and output Comments for the respective error. I have no issues when there is just one value for the ERROR field in an event (i.e., when there is only one error in the event), but when there is more than one error, I get the result below. Kindly help.
I'm having trouble building a top over a log source where the from value is given by "from*:" and not "from=*". That way I just get 0 results using: index="myindex" "from:" | top limit=10 from. I've tried with rename and replace, but with no success. How can I tell Splunk that the delimiter is ":" and not "=" for the search value? Or how can I replace/rename the result and then build a top over the result?
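The usual SPL answer here is an inline extraction (e.g. a rex with a named capture group before top), since automatic key=value extraction will not pick up colon-delimited fields. The extraction itself is just a regular expression; a sketch on made-up log lines:

```python
import re
from collections import Counter

# Sketch: extract the value after "from:" (colon-delimited, not key=value),
# then count occurrences as "top" would. The sample lines are made up.
lines = [
    "2020-03-11 mail event from: alice@example.com status: ok",
    "2020-03-11 mail event from: bob@example.com status: ok",
    "2020-03-11 mail event from: alice@example.com status: ok",
]

pattern = re.compile(r"from:\s*(\S+)")
values = [m.group(1) for line in lines if (m := pattern.search(line))]

print(Counter(values).most_common(10))  # a "top 10" over the extracted field
```

In Splunk the same pattern would go into a rex command (or a props.conf EXTRACT) so that the field exists before | top limit=10 runs.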
Hello, I would like to get the link to the alert results in a variable, possibly already during the alert's base search (at the end of it). Is it possible? Basically, I need something like what I get from Activity --> Triggered Alerts --> View Results, e.g.: https://splunk-ml.zone1.mo.sap.corp/en-US/app/mlbso/search?sid=scheduler__d046266__mlbso__RMD588cf20a54cf83ccb_at_1583910420_58750& ..... etc., but already at the end of the alert search, so that I can set a variable from it. The reason is that I need to integrate my alerts with another tool where I have very limited possibilities for using text, so there is no chance to build the output like in Splunk. What I thought would be best was to pass the link to the alert results, so that the alert processor can access Splunk directly. For that, I need this result link in some kind of variable set with eval... Is it possible? Kind Regards, Kamil
Hi Everyone! I currently want to achieve the following: I want to get data processed in a transaction (an example would be an employee ID) from a business transaction. I'm not quite sure whether the "Analytics" module is involved at this point. For example, the business transaction was made through HTTP, and the call has a field named employeeID with a value of 112233. How can I break down the transaction and search for employeeID? With the above goal achieved, how do I use it to query in ADQL analytics? I've been trying to figure out data collectors. It's still quite unclear to me; however, I understand that I need the class and method name to implement a data collector. What about the data that has been passed through HTTP? Is there any way to "dissect" the HTTP call?
Hello, everybody! I have a Splunk Enterprise 7.3.2 infrastructure with Splunk UFs deployed, in particular, to our corporate on-premises Microsoft Exchange Server 2016 servers. We have 40 Exchange servers with a really huge mail flow, and we want to collect all of the message tracking logs into Splunk for future investigations. We do not have a license for the Splunk App for Microsoft Exchange / Splunk Add-on for Microsoft Exchange, so I wrote a simple custom app for the UF with inputs.conf / props.conf to reach my goal.

Microsoft Exchange Server message tracking log files are CSV files with the first 4 lines commented with #, then a 5th commented field-headers line preceded by #Fields:, and then the data lines. A detailed format description can be found here: https://docs.microsoft.com/en-us/exchange/mail-flow/transport-logs/message-tracking?view=exchserver-2019

I deployed the following inputs.conf to my UFs:

[monitor://C:\Queue\TransportLogs\MessageTracking]
disabled = 1
time_before_close = 30
sourcetype = my_exchange_logs_message_tracking
ignoreOlderThan = 1d
crcSalt = <SOURCE>
whitelist = \.log$|\.LOG$
# blacklist =

I deployed the following props.conf to my UFs:

[my_exchange_logs_message_tracking]
disabled = false
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 25
INDEXED_EXTRACTIONS = CSV
FIELD_HEADER_REGEX = ^#Fields:\s*(.*)
# HEADER_FIELD_LINE_NUMBER = 5
PREAMBLE_REGEX = ^#.*
HEADER_FIELD_DELIMITER = ,
FIELD_DELIMITER = ,
FIELD_QUOTE = "
TRUNCATE = 0

Everything seems to be working fine, but sometimes I see that my INDEXED_EXTRACTIONS do not work for some source log file lines. I see two problems when looking at the events on the SH:

1. For some source lines, I do not see any fields extracted and indexed according to the headers.
2. For some other source lines, I see fields extracted and indexed not under the header-line field names but under a set of EXTRA_FIELD_## fields. Moreover, once one row fails the configured extractions with the described effect, so do all further lines until EOF.

I carefully checked the "problem" lines and actually found no problems with them! I also put the same configured inputs.conf / props.conf on another test server, brought the "problem" source files there - and they got indexed with no problems. With that written, I expect my UFs on the production Exchange servers sometimes have some problem that prevents the configured field extractions from being processed. I took a look into _internal but did not notice anything suspicious regarding field extractions. Does anybody have any ideas about what I should check next to catch and fix the problem?
Hi, I have log files like this:

log.machine03.20200310.bz2
log.machine04.20200310.bz2
log.machine05.20200310.bz2

These files are copied each day to /opt, and Splunk continuously monitors /opt. Now, how can I use the log filename to set the "host" and "date" of the logs in Splunk? FYI: every line of the log file stores only the time, NOT the date! That's why I need to use the date that exists in the file name. E.g., for file name = log.machine05.20200310.bz2:

01:00:00 info logmessage
02:00:00 info logmessage
03:00:00 info logmessage
...

Thanks,
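On the Splunk side, the host part of this is typically handled with host_regex or host_segment in the monitor stanza of inputs.conf, and the date would be combined with the in-event time during timestamp recognition. The parsing itself is a simple pattern over the source name, sketched here against the filename shape from the post:

```python
import re

# Sketch: pull host and date out of a source filename shaped like
# log.<host>.<YYYYMMDD>.bz2, mirroring the example in the post.
pattern = re.compile(r"log\.(?P<host>[^.]+)\.(?P<date>\d{8})\.bz2$")

m = pattern.search("log.machine05.20200310.bz2")
host, date = m.group("host"), m.group("date")
print(host, date)  # → machine05 20200310
```

The same capture-group idea carries over to host_regex in inputs.conf, which takes a regex over the full source path and uses the first capture group as the host.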
Hi, I am new to Splunk dashboard development; so far I am creating KPIs using just 'single value'. I have three KPIs that resulted in 600, 250, and 150.

KPI 1 search expression - result is 600 (example):
index=indexname kubernetes.container_name=name1 MESSAGE = "*search for code1*" | spath output=msg path=MSG | table _time msg | stats count as count1

KPI 2 search expression - result is 250 (example):
index=indexname kubernetes.container_name=name2 MESSAGE = "*search for code2*" | spath output=msg path=MSG | table _time msg | stats count as count2

KPI 3 search expression - result is 150 (example):
index=indexname kubernetes.container_name=name3 MESSAGE = "*search for code3*" | spath output=msg path=MSG | table _time msg | stats count as count3

I have shown the above KPIs as numbers in the dashboard. However, I would like to show a pie chart with 60%, 25%, and 15% shares for the above numbers. Could anyone please help me with the search expression to create this chart? Thanks in advance. Raju
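In SPL, a common pattern is to combine the three counts into one result set (e.g. with append) and feed the counts to a pie chart, which renders them as proportional shares automatically. The share arithmetic itself, using the numbers from the post, is just:

```python
# Sketch: the pie-chart shares for the three KPI counts in the post.
counts = {"KPI1": 600, "KPI2": 250, "KPI3": 150}

total = sum(counts.values())                                  # 1000
shares = {name: 100 * value / total for name, value in counts.items()}
print(shares)  # → {'KPI1': 60.0, 'KPI2': 25.0, 'KPI3': 15.0}
```

So the 60% / 25% / 15% split the poster expects falls out of the raw counts; no percentage computation is needed in the search if the visualization is a pie chart over the three counts.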
Example data: we need to extract the JSON data below into table format in Splunk.

"assets": [
  {
    "id": 1,
    "last_seen_time": "2020-02-26T16:23:06Z",
    "network_ports": [
      {"id": 100, "port_number": 111, "extra_info": "", "hostname": null, "name": "unknown", "ostype": "", "product": null, "protocol": "tcp", "state": "open", "version": null},
      {"id": 343, "port_number": 444, "extra_info": "", "hostname": null, "name": "unknown", "ostype": "", "product": null, "protocol": "tcp", "state": "open", "version": null}
    ],
    "tags": ["Loc: Ajay"],
    "owner": null,
    "urls": {"vulnerabilities": "google.com/examples/1012/tests"},
    "ip_address": "1.1.0.91",
    "database": null,
    "hostname": "swetha",
    "asset_groups": [{"id": 191300, "name": "All examples"}]
  },
  {
    "id": 1012,
    "last_seen_time": "2020-02-26T16:23:06Z",
    "network_ports": [
      {"id": 331, "port_number": 135, "extra_info": "", "hostname": null, "name": "unknown", "ostype": "", "product": null, "protocol": "tcp", "state": "open", "version": null},
      {"id": 343, "port_number": 444, "extra_info": "", "hostname": null, "name": "unknown", "ostype": "", "product": null, "protocol": "tcp", "state": "open", "version": null}
    ],
    "tags": ["Loc: NorthCEE"],
    "owner": null,
    "urls": {"vulnerabilities": "google.com/examples/2/tests"},
    "ip_address": "1.1.0.92",
    "database": null,
    "hostname": "sweety",
    "asset_groups": [{"id": 191300, "name": "All exs"}]
  }
]
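In Splunk, this kind of nested array is typically handled with spath plus mvexpand over the assets and their network_ports so that each asset/port pair becomes a row. The flattening logic looks like this, using a trimmed version of the sample data (only a few of the fields are kept for brevity):

```python
import json

# Sketch: flatten the "assets" array into table rows, one row per asset
# per network port, using a trimmed version of the sample data.
data = json.loads("""
{"assets": [
  {"id": 1, "ip_address": "1.1.0.91", "hostname": "swetha",
   "network_ports": [{"port_number": 111, "protocol": "tcp", "state": "open"},
                     {"port_number": 444, "protocol": "tcp", "state": "open"}]},
  {"id": 1012, "ip_address": "1.1.0.92", "hostname": "sweety",
   "network_ports": [{"port_number": 135, "protocol": "tcp", "state": "open"},
                     {"port_number": 444, "protocol": "tcp", "state": "open"}]}
]}
""")

rows = [
    (a["id"], a["hostname"], a["ip_address"], p["port_number"], p["state"])
    for a in data["assets"]
    for p in a["network_ports"]
]
for row in rows:
    print(row)
# First row: (1, 'swetha', '1.1.0.91', 111, 'open')
```

The two-level loop is the same expansion mvexpand performs in SPL: the 2 assets with 2 ports each become 4 table rows.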
Is it possible to install Splunk 8.0.1 on Windows Server 2012 R2? Does Splunk 8.0 support Windows Server 2012 R2?