All Topics


@luke_monahan I followed your Prometheus app setup. On the Heavy Forwarder it's up and listening on 8098, and the Prometheus side seems OK too. However, Prometheus is throwing "Unsupported Media Type" errors:

msg="non-recoverable error" count=9 exemplarCount=0 err="server returned HTTP status 415 Unsupported Media Type: <!doctype html><html><head><meta http-equiv=\"content-type\" content=\"text/html; charset=UTF-8\"><title>415 Unsupported Media Type</title></head><body><h1>Unsupported Media Type</h1><p>The requested URL does not support the media type sent.</p></body></html>"

In prometheus.yml, under remote_write:

- url: "https://myhost.mydomain.com:8089"
  authorization:
    credentials: "ABC123"
  tls_config:
    insecure_skip_verify: true
  write_relabel_configs:
    - source_labels: [__name__]
      regex: expensive.*
      action: drop

In the Splunk HF:

[prometheusrw]
port = 8098
maxClients = 10
disabled = 0

[prometheusrw://http_status]
bearerToken = ABC123
index = prometheus
whitelist = *
sourcetype = prometheus:metric
disabled = 0

Help appreciated.
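One thing worth double-checking, purely an observation from the two snippets above and not a confirmed fix: the remote_write URL targets port 8089, which is Splunk's management port, while the prometheusrw input is listening on 8098. A sketch of the two sides agreeing on the port, reusing the host and token from the post:

# prometheus.yml -- point remote_write at the prometheusrw listener
remote_write:
  - url: "https://myhost.mydomain.com:8098"
    authorization:
      credentials: "ABC123"
    tls_config:
      insecure_skip_verify: true

# inputs.conf on the HF, unchanged from the post
[prometheusrw]
port = 8098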
Hi! I'm having a real issue trying to get eventgen working. I'm trying to use outputMode = s2s but it is bombing out with the below.

2021-07-28 15:06:42 eventgen ERROR MainProcess 'utf-8' codec can't decode byte 0xb3 in position 3: invalid start byte
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/eventgen_core.py", line 304, in _worker_do_work
    item.run()
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/outputplugin.py", line 39, in run
    self.flush(self.events)
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 204, in flush
    m["_time"],
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 173, in send_event
    e = self._encode_event(index, host, source, sourcetype, _raw, _time)
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 124, in _encode_event
    encoded_raw = self._encode_key_value("_raw", _raw)
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 78, in _encode_key_value
    return "%s%s" % (self._encode_string(key), self._encode_string(value))
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 69, in _encode_string
    "utf-8"
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb3 in position 3: invalid start byte

My eventgen.conf file looks like this:

[cisco_asa.sample]
mode = replay
count = -1
timeMultiple = 1
sampletype = raw
# outputMode = tcpout
outputMode = s2s
splunkHost = splunk_search
splunkPort = 9997
source = udp:514
host = boundary-fw1
index = main
sourcetype = cisco:asa
# tcpDestinationHost = splunk_uf1
# tcpDestinationPort = 3333
token.0.token = \w{3} \d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %b %d %H:%M:%S

It works fine with tcpout (the commented-out bits above) but not as s2s.

I'm executing eventgen like this:

/usr/bin/python3.7 /usr/bin/splunk_eventgen -v generate /opt/splunk-eventgen/default/eventgen.conf

The reason I'm using s2s is that I'd like to generate sample data as if it's coming from many hosts, sources and sourcetypes, and I can't do that with tcpout. In the above config, splunk_search is a standalone test Splunk install. Sending directly to this Splunk host via s2s fails. If I switch back to tcpout, I'm sending to a Splunk UF with a tcp input configured, which then sends to splunk_search via tcp/9997.

eventgen was installed and configured as per http://splunk.github.io/eventgen/SETUP.html#install

Any suggestions?
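Not a fix, just an observation from the traceback: the s2s plugin's _encode_string decodes _raw as UTF-8, and byte 0xb3 is not a valid UTF-8 start byte, so cisco_asa.sample likely contains non-UTF-8 (e.g. Latin-1) characters that tcpout passes through as raw bytes. A small sketch to locate them; the sample path is a guess, point it at wherever your sample file actually lives:

# Scan an eventgen sample for lines that fail UTF-8 decoding.
# Path is hypothetical -- adjust to where cisco_asa.sample actually lives.
with open("/opt/splunk-eventgen/samples/cisco_asa.sample", "rb") as f:
    for lineno, line in enumerate(f, 1):
        try:
            line.decode("utf-8")
        except UnicodeDecodeError as err:
            print(f"line {lineno}: {err}")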
I am not able to connect to the Splunk web console using the Python SDK; I'm getting a timed-out error. I suspect the port is not being allowed. Could you please help?

My code:

import sys
import getpass
sys.path.append('splunk-sdk-python-1.6.16')
import splunklib.client as client

def setServer(hostname, splunkuser, splunkpassword):
    HOST = hostname
    PORT = 8089
    USERNAME = splunkuser
    PASSWORD = splunkpassword
    service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
    #for app in service.apps:
    #    print(app.name)
    # Get the collection of users
    #users = service.users
    #print(users)

if __name__ == "__main__":
    hostname = input("Enter Splunk Hostname/IP: ")
    splunkUser = input("Enter Splunk Admin Username: ")
    splunkPassword = getpass.getpass("Enter Splunk Admin Password: ")
    setServer(hostname, splunkUser, splunkPassword)

Output:

# python3 test66.py
Enter Splunk Hostname/IP: prd-p-pivip.splunkcloud.com
Enter Splunk Admin Username: sc_admin
Enter Splunk Admin Password:
Traceback (most recent call last):
  File "/Users/sasikanth.bontha/test66.py", line 23, in <module>
    setServer(hostname, splunkUser, splunkPassword)
  File "/Users/sasikanth.bontha/test66.py", line 11, in setServer
    service = client.connect(host=HOST,port=PORT,username=USERNAME,password=PASSWORD)
  File "/usr/local/lib/python3.9/site-packages/splunklib/client.py", line 331, in connect
    s.login()
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 883, in login
    response = self.http.post(
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 1242, in post
    return self.request(url, message)
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 1259, in request
    response = self.handler(url, message, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 1399, in request
    connection.request(method, path, body, head)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1257, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1303, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1252, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1012, in _send_output
    self.send(msg)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 952, in send
    self.connect()
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1419, in connect
    super().connect()
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 923, in connect
    self.sock = self._create_connection(
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py", line 843, in create_connection
    raise err
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py", line 831, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
Hello, I have created this search: index=reg host=mp1 "export_successful" | timechart count by "import_successful". Out of this I have created a column chart for visualization. At the moment it shows whether there was a successful export each day (every day there is just one), but I would also like to see when the export was not successful. What is the easiest way to do this? Thank you, ava
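A sketch along these lines, under one assumption: that the failure case is searchable by a literal term. "export_failed" below is a guess; substitute whatever actually appears in your events. Splitting the timechart by outcome gives one column per day per outcome, and days with neither event still appear as empty buckets on the time axis:

index=reg host=mp1 ("export_successful" OR "export_failed")
| eval outcome=if(searchmatch("export_successful"), "successful", "failed")
| timechart span=1d count by outcome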
I have this query:

index="main" | stats count by Text | sort -count | table count Text

Results:

count  Text
10     b'dog fish
20     dog cat

How can I drop the "b'" prefix from the beginning of the results (only from the beginning, not everywhere in the string)?
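That b' prefix looks like Python bytes-literal residue. A sketch using an anchored replace(); the ^ in the regex is what restricts the removal to the start of the string:

index="main"
| eval Text=replace(Text, "^b'", "")
| stats count by Text
| sort -count
| table count Text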
I am trying to run the following tstats search on an indexer cluster recently updated to Splunk 8.2.1:

| tstats count where index=_internal by host

The search returns no results. I suspect the reason is this message in the search log of the indexer:

Mixed mode is disabled, skipping search for bucket with no TSIDX data: \opt\splunkhot\_internaldb\db\hot_v1_4334

When I check the specified bucket folder, I can see the tsidx files inside. Interestingly, this issue occurs only with the _internal index; the same command works fine with other indexes. I have the datamodel "Splunk's Internal Server Logs" enabled and accelerated. Any suggestions on where to start troubleshooting this?
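The warning reads as though those buckets are being treated as having reduced or missing tsidx data. One starting point might be to ask dbinspect how it classifies the _internal buckets, assuming your version reports the tsidxState field (I believe it does from 7.x onward, but verify):

| dbinspect index=_internal
| stats count by splunk_server, state, tsidxState

If tsidxState comes back as something other than "full" for the hot buckets, that would line up with the "no TSIDX data" message and narrow the investigation to why those buckets were reduced.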
I have this query:

index="main" | stats count by Text | sort -count | table count Text

Results:

count  Text
10     dog fish
20     dog cat

How can I change the comparison so that it compares only the first X characters of Text, for example the first 4 characters, so "dog fish" and "dog cat" become one line?

count  Text
30     dog ...
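A sketch using substr (note that SPL's substr is 1-based), grouping on the first four characters before counting; "dog fish" and "dog cat" both land in the "dog " bucket:

index="main"
| eval TextPrefix=substr(Text, 1, 4)
| stats count by TextPrefix
| sort -count
| table count TextPrefix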
Hi, I have the following data that gives me the below graph; however, if the data stops coming in I want to see "black steps" to the right, to show the user there is no more data. Ideally I want a timechart, as that will naturally do this, but I'm not sure how to get it from the query that I have. Another option is to keep filling time until now() and then fill null for all the values to 0.

| mstats max("mx.process.cpu.utilization") as cpuPerc max("mx.process.threads") as nbOfThreads max("mx.process.memory.usage") as memoryCons max("mx.process.file_descriptors") as nbOfOpenFiles avg("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=1000s BY pid
| foreach * [ eval <<FIELD>>=if(<<FIELD>> > .0000001, 1 ,0)]

Any help would be great, thanks.
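A sketch of the timechart route, shown for one of the metrics (the others follow the same pattern): timechart buckets the entire search window, so if the search runs with latest=now() the trailing buckets exist but are null, and fillnull turns them into the zeros you want. Your foreach can then run unchanged afterwards:

| mstats max("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=1000s BY pid
| timechart span=1000s max(cpuPerc) AS cpuPerc BY pid
| fillnull value=0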
Hi All, I am trying to create a dashboard in trellis view. I created the below query for my search:

index=abcd host="mwgcb-ckbla02U*" source="/logs/confluent/kafkaLogs/server.log"
| rex field=_raw "(?ms)]\s(?P<Code>\w+)\s\["
| search Code="WARN"
| rex field=_raw "^(?:[^ \n]* ){3}\[(?P<code_id>[^\]]+)"
| search code_id="AdminClient clientId=adminclient-*"
| stats count
| eval mwgcb-ckbla02u=if(count=0, "Running", "Down")
| table mwgcb-ckbla02u

Here I am using trellis view with the "single value" visualization. It all came up perfectly, but I am not able to change the colour of the trellis box: when it's "Running" the box should be green, and when "Down" it should be red. Can anyone please help with this?

Thanks.
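A sketch under one assumption: Simple XML's range coloring works on numeric values, so the panel is driven by the raw count rather than the Running/Down string (the trade-off is that the tile shows the warning count instead of a word). Option names are from the Simple XML single value reference; with rangeValues of [0], a count of 0 gets the first colour (green) and anything above it the second (red):

<single>
  <search>... your search, ending at | stats count ...</search>
  <option name="colorBy">value</option>
  <option name="useColors">1</option>
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
</single>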
I have a network with several security zones, each with its own domain and its own Heavy Forwarder (e.g. domain XYZ). There are very restrictive firewalls between these zones. All events from all domains end up on my management zone's indexer and are accessed by a SH in the same management zone (e.g. domain ABC). Currently, it seems that the "ldapsearch" input cannot be distributed back to each of the Heavy Forwarders in domain XYZ and is only executed in my management domain ABC. According to company policy and network design principles, the management zone Splunk instances cannot be allowed to perform LDAP queries directly against the other domains (XYZ), which are considered more secure. How can I either instruct the Heavy Forwarders in the other domains to execute the ldapsearch on my behalf, or run these searches on a schedule so the information lands in the index? Is it possible to instruct the Heavy Forwarder of domain XYZ to perform the ldapsearch on behalf of the search head in domain ABC?
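A pattern that might fit, assuming SA-ldapsearch can be installed on the XYZ Heavy Forwarders (the stanza name, schedule, LDAP filter and index below are placeholders): run the ldapsearch as a locally scheduled saved search on each HF and collect the results into an index. Since a HF forwards what it indexes, the collected events should then ride the existing forwarding path to your management-zone indexers without opening any new LDAP paths:

# savedsearches.conf on the XYZ Heavy Forwarder
[ldap_xyz_user_sync]
search = | ldapsearch domain=XYZ search="(objectClass=user)" attrs="sAMAccountName,distinguishedName" | collect index=ldap_xyz
enableSched = 1
cron_schedule = 0 */4 * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now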
Hello, I added some JavaScript functionality to my multiselects (see: https://www.advisori.de/splunk-struggles-with-multiselects-and-how-to-rule-them-all-or-at-least-some/). The script works fine for my dashboard until I edit the XML. If I click edit, change some code, and click either save or discard, the multiselects behave as if the script did not exist; the automatic option removal stops working entirely. Clearing the cache locally in my browser or using the _bump endpoint solves the problem and makes the dashboard and the script work fine again, just until the XML is edited again. Is there a better (permanent) solution besides clearing the cache after each edit? Thanks in advance!
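I don't have a permanent fix to offer with confidence, but one development-oriented option to investigate: web.conf has historically had a js_no_cache setting that disables JS caching entirely, which would make _bump unnecessary while you iterate. Treat the setting name and its availability as an assumption to verify against your version's web.conf.spec, and don't leave it on in production:

# web.conf on the search head -- dev use only; confirm this setting
# exists in your version's web.conf.spec before relying on it
[settings]
js_no_cache = True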
The query below works for finding out about a single Windows server, but could you please post a version using a lookup containing all the hosts to monitor?

index=<your_index> source=WinEventLog* EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008
| stats count by host
| where count > 1
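A sketch using a lookup as a host filter; the lookup name monitored_hosts.csv is a placeholder, and it needs a column literally named host so the subsearch expands to (host=a OR host=b OR ...). Parentheses are also added around the EventCode clauses so the OR terms don't bind to the rest of the search:

index=<your_index> source=WinEventLog* (EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008)
    [| inputlookup monitored_hosts.csv | fields host ]
| stats count by host
| where count > 1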
Hi Team, I would like to monitor Linux machines' uptime and downtime; an alert needs to be triggered in Splunk when a server is rebooted or shut down. Please suggest which solution is best.
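One common pattern, assuming Linux syslog data arrives in an index called os (adjust the index name to yours): rather than parsing shutdown messages, alert on hosts that have gone quiet. Detecting an actual reboot can be done by searching for the boot-time messages your distribution logs, but the silence check below is distribution-agnostic:

| tstats latest(_time) as lastSeen where index=os by host
| eval minutesSinceLastEvent = round((now() - lastSeen) / 60, 0)
| where minutesSinceLastEvent > 15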
Hello, I have a search where I need to combine two inputlookups to find the values of a shared field that appear in one lookup but not in the other. The inputlookups are quite big, so my current searches with join or search NOT are not working most of the time; they result in a timeout. Is there a better way to do this without join or search NOT? My current search with join looks like this:

| inputlookup table1
| join type=left "ip"
    [| inputlookup table2 | mvexpand ip | eval xy="xy" | table ip xy]
| where isnull(xy)
| table ip

I've tried another search with NOT, but it works even worse:

| inputlookup table1
| search NOT ([| inputlookup table2 | return 10000 ip])

As I said, both searches result in a timeout. I've been stuck on this problem for hours, so any help would be highly appreciated!
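One subsearch-free way to take the set difference, assuming both lookups share the ip column: inputlookup append=true stacks the second lookup onto the first without join's or a subsearch's row limits, a source tag marks where each row came from, and stats folds duplicates per ip:

| inputlookup table1
| eval src="t1"
| inputlookup append=true table2
| eval src=coalesce(src, "t2")
| mvexpand ip
| stats sum(eval(if(src="t1", 1, 0))) as in_t1, sum(eval(if(src="t2", 1, 0))) as in_t2 by ip
| where in_t1 > 0 AND in_t2 = 0
| fields ip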
Hi all, we have just installed the Wazuh app on Splunk. We see both Wazuh and Splunk active, but the forwarder only sends data when restarted, and after a few seconds it stops sending. Does anyone know how to solve this issue? Thanks in advance, J
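Generic forwarder troubleshooting rather than anything Wazuh-specific: the forwarder's own splunkd.log usually says why output stopped (blocked queues, connection resets). Assuming the forwarder's _internal logs reach your indexers, something like this, with the host placeholder replaced by your forwarder's name:

index=_internal host=<your_forwarder> source=*splunkd.log* (log_level=ERROR OR log_level=WARN) component=TcpOutputProc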
Whenever I've created eval fields in a data model before, they've been just a single command. Is it possible to use a multiline eval command for a field? This is what I want to make into a single field:

| eval AEST_time=_time+36000
| convert timeformat="%Y-%m-%dT%H:%M:%S.%3Q %Z" ctime(AEST_time)
| eval epoch=strptime(AEST_time, "%Y-%m-%dT%H:%M:%S.%3Q %Z")
| eval date=strftime(epoch, "%Y-%m-%d")
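As far as I can tell, a data model eval field takes a single eval expression, not a pipeline. In this particular pipeline, though, the string round-trip appears to cancel out: AEST_time is formatted to text and immediately parsed back with the same format string, so the whole chain seems to reduce to one expression. Worth verifying against a few sample events, since the %Z round-trip could in principle shift the result:

strftime(_time + 36000, "%Y-%m-%d")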
Hi. I need to extract container timeline events via the REST API in order to generate analyst, playbook and action timeline reports. The closest endpoint I can find is briefly mentioned in the REST API documentation:

/rest/container/<container id>/actions

I can't find any other mention of this endpoint in the documentation. This endpoint is useful; however, it only provides action history, not analyst or playbook activity history. The Phantom web portal calls an undocumented API which returns exactly what I need:

/rest/container/<container ID>/timeline?<many required query parameters>

...however it requires many query parameters, and if you don't get them correct it returns empty results. My questions:
1. Can someone refer me to documentation for the container timeline API endpoint mentioned above?
2. If not, is there an alternative documented endpoint that will return all container timeline information?
Thanks
We are planning to upgrade our clustered deployment from 8.0.5 to 8.2.1. If we plan to migrate the KV store engine to WiredTiger, do we have to go through the v8.1 step as described in https://docs.splunk.com/Documentation/Splunk/8.2.1/Admin/MigrateKVstore ? Per my understanding, the steps for this method are:
1. Upgrade the cluster from 8.0.5 to 8.1.5
2. Migrate the KV store after an upgrade to Splunk Enterprise 8.1 in a clustered deployment
3. Upgrade the cluster from 8.1.5 to 8.2.1

If we decide not to use WiredTiger for now and complete the upgrade to 8.2.1, can we migrate the KV store to WiredTiger at a later point in time? To elaborate, will the following work:
1. Upgrade the cluster from 8.0.5 to 8.2.1
2. At a later time, if WiredTiger is needed, migrate the KV store as instructed in "Migrate the KV store after an upgrade to Splunk Enterprise 8.1 in a clustered deployment"

We do not use the KV store at all right now, apart from whatever internal functions Splunk Enterprise uses it for (no ITSI or ES either), but we want to plan ahead in case we use it in the future. Thanks in advance!
Hello, my client has requested that we ingest G Suite logs. When searching I see several apps which do not have Splunk support.
Question 1: What should I understand when an app does not have Splunk support?
Question 2: What is the best way to ingest the G Suite logs?

https://splunkbase.splunk.com/app/3793/
or https://splunkbase.splunk.com/app/4560/
or https://splunkbase.splunk.com/apps/#/search/G%20Suite/
I'm trying to count the number of occurrences / frequency / variations of the arguments appearing for a bat file. For example, GradeReport.bat has this template:

GradeReport.bat "grade criteria" "start date" "end date" course# CSV_format

Examples:
1) GradeReport.bat "Best grade" "07/20/2021" "07/27/2021" 135629 CSV
2) GradeReport.bat "Average grade" "" "07/27/2021" "" CSV
3) GradeReport.bat "Best grade" "" "" 225386 CSV
4) GradeReport.bat "Student grade" "07/16/2021" "" "" CSV

The query would return:
First argument count: 4
Second argument count: 2
Third argument count: 2
"Best grade" = 2
"Average grade" = 1
"Student grade" = 1
etc.

Thanks for the help.
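A sketch assuming the full command line is in _raw and all five arguments are always present, quoted or empty, in template order. The rex pulls each argument into its own field, and the eval-based counts tally non-empty values per position; against the four examples above this yields 4, 2 and 2:

| rex field=_raw "GradeReport\.bat \"(?<grade_criteria>[^\"]*)\" \"(?<start_date>[^\"]*)\" \"(?<end_date>[^\"]*)\" (?<course>\S+) (?<csv_format>\S+)"
| stats count(eval(grade_criteria!="")) as first_argument_count, count(eval(start_date!="")) as second_argument_count, count(eval(end_date!="")) as third_argument_count

For the per-value breakdown ("Best grade" = 2, and so on), the same extraction feeds a count by value:

| stats count by grade_criteria
| sort -count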