All Topics

Hi, I have the following output from my search:

base search | stats count by User, action

User    action     count
Alex    install    3
Alex    uninstall  5

But I would like the table to display as below:

User    install    uninstall    Total
Alex    3          5            8

Please let me know if this is possible.
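A minimal sketch of one way to do this, using chart to pivot the action values into columns and addtotals for the row sum (field names taken from the sample above):

base search
| chart count over User by action
| addtotals fieldname=Total

Since chart builds one column per action value, any new actions would appear as extra columns automatically.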
Splunk IMAP is indexing multiple emails with different subjects and source senders, but the same timestamp, into one event. What should I check? I'm at a total loss.
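A hedged guess: if the messages share a timestamp, line merging may be collapsing them into one event. A props.conf sketch that disables merging and breaks on a message boundary (the sourcetype name and the boundary pattern here are assumptions, not the IMAP add-on's actual settings):

[imap]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=From:\s)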
Hello fellow Splunkers, My team has recently implemented the MLTK to track outliers and deviations in network events across several devices. Although I didn't set up the MLTK myself, it is running a query over 5-minute intervals to allow analysts to quickly scope deviations from the baseline (upper bound, etc.). All of this is completely fine; however, when we invoke a Notable Event in ES we are left with 24 iterations of the Notable Event, each representing a 5-minute interval (the MLTK requires a 2-hour interval to create a new baseline). I was wondering if there is any method to group or cluster these notables into a single Notable Event. We are currently throttling the notable to one invocation per hour, but this is obviously not a permanent solution, as it can cause us to miss alerts that fire within an hour of the previous iteration. Any insight into this would be extremely helpful. Thanks!
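One hedged pattern, sketched rather than offered as ES-specific guidance: have the correlation search itself aggregate the 5-minute outlier results over the 2-hour baseline window, so a single notable carries all of them. The index and field names below are assumptions:

index=mltk_outlier_summary earliest=-2h
| stats count AS outlier_count values(device) AS devices min(_time) AS first_seen max(_time) AS last_seen
| where outlier_count > 0

Run on a 2-hour schedule, this fires at most one notable per window while preserving which devices and intervals were involved.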
Hello friends, I am facing issues that may be caused by inputs.conf, but I am not able to get to the bottom of the problem.

When I search index=network "*f5*", two sourcetypes come back, each with some event count:
f5:bigip:syslog
f5:bigip:asm:syslog

When I search index=network sourcetype="*f5*", I get these sourcetypes instead, each with some event count:
f5:bigip:ltm:tcl:error
f5:bigip:syslog

But when I search index=network sourcetype="f5:bigip:asm:syslog", no events are found.

I am collecting the data from the F5 on a heavy forwarder (HF) and forwarding from the HF to the indexer. The add-on is installed, but I am not sure whether it is configured. inputs.conf contains an entry for f5:bigip:syslog. Can someone please help? Regards, Gaurav @prakash007 @renjith_nair @jbsplunk
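A hedged first step: compare the indexed sourcetypes against what search time shows. tstats reads the index-time sourcetype, so it bypasses any search-time renaming the F5 add-on may be doing, which would explain a sourcetype that appears in results but matches nothing when searched for directly:

| tstats count WHERE index=network BY sourcetype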
Would someone be able to help me understand how to do this? I would like to modify the built-in dashboard in the InfoSec app to exclude a specific source IP address. The default search the dashboard uses is below.

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks where * IDS_Attacks.severity="*" by IDS_Attacks.signature, IDS_Attacks.severity
| rename "IDS_Attacks.*" as "*"
| sort severity

Currently, that dashboard visual is full of events from my vulnerability scanner running scans.
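A minimal sketch of the same search with an exclusion added in the where clause. The address 10.0.0.5 stands in for the scanner's IP, and this assumes the src field of the IDS_Attacks dataset is populated for those events:

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks where IDS_Attacks.severity="*" IDS_Attacks.src!="10.0.0.5" by IDS_Attacks.signature, IDS_Attacks.severity
| rename "IDS_Attacks.*" as "*"
| sort severity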
Every month when software updates go out, my Enterprise deployment exceeds the license. I get overloaded with Event Code 4663. After the first time, I just added it to the blacklist in inputs.conf, and problem solved. I'd like to leave that EventCode active and only disable it while the majority of systems are updating. I know I can do this manually, but I am trying to find out whether there is a way to automatically enable the blacklist based on the date, or to set a trigger based on a specific series of event codes that indicate software updates. If anyone has tried this before, I'm very curious whether there's a solution.
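One hedged alternative to toggling by date: Windows event log blacklists can match a regex per field, so you could keep 4663 enabled in general and drop only the 4663s whose message points at update activity. A sketch (the stanza name is the standard Security input; the pattern is an assumption about what your update events contain, and the value is a regex matched anywhere in the field):

[WinEventLog://Security]
blacklist1 = EventCode="4663" Message="SoftwareDistribution"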
So, here is an issue where I can't find some services (e.g., service x, service y, service z) under the field service_name in the Splunk itsi_summary index, but the corresponding service_ids are there in the itsi_summary index. However, when I look for those services in the lookup service_kpi_lookup, I do find them under the title field.

When I do a simple search - index=itsi_summary | stats count serviceid - I am getting a count of 1029, but when I do - index=itsi_summary | stats count by service_name - I am getting a count of 1024. Furthermore, if I do - | inputlookup service_kpi_lookup | stats count by title - I am getting a count of 1029.

So, there seems to be something broken in whatever populates the service_name field in itsi_summary. Can anyone help me with this? I need to understand how this service_name field gets populated.
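A sketch to isolate which serviceids lack a populated service_name, so the two counts can be reconciled; it assumes only the two field names from the post:

index=itsi_summary
| eval service_name=coalesce(service_name, "MISSING")
| stats dc(serviceid) AS ids values(serviceid) AS serviceids BY service_name
| search service_name="MISSING"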
Hello, I'm working on a new script to install Splunk via bash. Before accepting the license and starting Splunk (with no prompt and answering yes), I'm creating the user-seed.conf file in system/local:

# create admin account
cd /opt/splunk/etc/system/local/
touch user-seed.conf
echo "[user_info]" >> user-seed.conf
echo "USERNAME = admin" >> user-seed.conf
echo "HASHED_PASSWORD = <hashed pass>" >> user-seed.conf

However, after

/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt

I go back to look for user-seed.conf and it no longer exists. I'm also removing any etc/passwd file before starting. When Splunk starts with the hashed password in user-seed.conf, does that file disappear or get moved? Maybe I'm going about this the wrong way? Is there a better way to do this? Thanks for the thoughts! Todd
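A small bash sketch of the same file creation with a heredoc instead of repeated echo appends; it also overwrites any leftover file from a previous run instead of appending to it (paths and the placeholder hash are from the post):

# write user-seed.conf in one shot; > truncates any stale copy
cat > /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
HASHED_PASSWORD = <hashed pass>
EOF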
Hello all. I am trying to use a lookup to perform a tstats search against a data model, where I want multiple search terms for the same field. However, I cannot get this to work as desired. I have an example below to show what is happening and what I'm trying to achieve.

I have a lookup file named search_terms.csv:

process_exec,process,process
someexe.exe,*param1*,*param2*

Given this lookup file, here is the expanded search I am trying to achieve:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND Processes.process=*param1* AND Processes.process=*param2*) by Processes.dest

Here is the first search I tried:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where [ | inputlookup search_terms.csv | fields process_exec process | rename process_exec AS Processes.process_exec | rename process AS Processes.process ] by Processes.dest

However, expanding this search leads to the second process column being ignored:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND Processes.process=*param1*) by Processes.dest

Since this did not work, I tried editing the lookup file to look like this:

process_exec,process
someexe.exe,*param1*|||*param2*

Then I used makemv to make the process field multivalue:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where [ | inputlookup search_terms.csv | fields process_exec process | makemv delim="|||" process | rename process_exec AS Processes.process_exec | rename process AS Processes.process ] by Processes.dest

However, this search expanded with an "OR" between the two process values instead of an "AND":

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND (Processes.process=*param1* OR Processes.process=*param2*)) by Processes.dest

Does anyone know of a method to create a search using a lookup that would lead to my desired search of:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND Processes.process=*param1* AND Processes.process=*param2*) by Processes.dest
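A hedged sketch of one workaround: build the whole boolean string inside the subsearch and hand it back with return, so you control AND versus OR yourself. This uses the delimiter variant of the lookup, covers the single-row case only (multiple rows would need the per-row strings OR-joined), and is untested:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where
    [ | inputlookup search_terms.csv
      | eval terms="Processes.process=" . mvjoin(split(process, "|||"), " AND Processes.process=")
      | eval query="(Processes.process_exec=" . process_exec . " AND " . terms . ")"
      | return $query ]
    by Processes.dest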
Hi. I have a problem with strptime. I am trying to convert a date with datee1=strptime('datee', "%d-%b-%y"), but with some dates it doesn't work. Example:

datee          datee1
31-ago-16
13-feb-19      1550026800.000000

When I overwrite 31-ago-16 with 13-feb-19, it works! I don't understand. My source is a lookup file.
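A hedged observation: ago looks like the Spanish abbreviation for agosto, and %b typically matches English month names (feb happens to be spelled the same in both), which would explain the pattern above. A sketch that maps the four Spanish abbreviations that differ from English before parsing:

| eval datee_en=replace(replace(replace(replace(datee, "ene", "jan"), "abr", "apr"), "ago", "aug"), "dic", "dec")
| eval datee1=strptime(datee_en, "%d-%b-%y")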
@luke_monahan I followed your Prometheus app setup. On the heavy forwarder it is up and listening on 8098, and the Prometheus side seems OK too. However, Prometheus is throwing "Unsupported Media Type" errors:

msg="non-recoverable error" count=9 exemplarCount=0 err="server returned HTTP status 415 Unsupported Media Type: <!doctype html><html><head><meta http-equiv=\"content-type\" content=\"text/html; charset=UTF-8\"><title>415 Unsupported Media Type</title></head><body><h1>Unsupported Media Type</h1><p>The requested URL does not support the media type sent.</p></body></html>"

In prometheus.yml, under remote_write:

- url: "https://myhost.mydomain.com:8089"
  authorization:
    credentials: "ABC123"
  tls_config:
    insecure_skip_verify: true
  write_relabel_configs:
    - source_labels: [__name__]
      regex: expensive.*
      action: drop

On the Splunk HF:

[prometheusrw]
port = 8098
maxClients = 10
disabled = 0

[prometheusrw://http_status]
bearerToken = ABC123
index = prometheus
whitelist = *
sourcetype = prometheus:metric
disabled = 0

Help appreciated.
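One thing worth checking, stated as a guess: the remote_write URL targets port 8089, which is splunkd's management port, while the prometheusrw input is listening on 8098; the management port answering with an HTML 415 would be consistent with the error above. A sketch of the URL pointed at the listener instead (hostname taken from your config):

- url: "https://myhost.mydomain.com:8098"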
Hi! I'm having a real issue trying to get eventgen working. I'm trying to use outputMode = s2s, but it is bombing out with the below.

2021-07-28 15:06:42 eventgen ERROR MainProcess 'utf-8' codec can't decode byte 0xb3 in position 3: invalid start byte
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/eventgen_core.py", line 304, in _worker_do_work
    item.run()
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/outputplugin.py", line 39, in run
    self.flush(self.events)
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 204, in flush
    m["_time"],
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 173, in send_event
    e = self._encode_event(index, host, source, sourcetype, _raw, _time)
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 124, in _encode_event
    encoded_raw = self._encode_key_value("_raw", _raw)
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 78, in _encode_key_value
    return "%s%s" % (self._encode_string(key), self._encode_string(value))
  File "/usr/lib/python3.7/site-packages/splunk_eventgen/lib/plugins/output/s2s.py", line 69, in _encode_string
    "utf-8"
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb3 in position 3: invalid start byte

My eventgen.conf file looks like this:

[cisco_asa.sample]
mode = replay
count = -1
timeMultiple = 1
sampletype = raw
# outputMode = tcpout
outputMode = s2s
splunkHost = splunk_search
splunkPort = 9997
source = udp:514
host = boundary-fw1
index = main
sourcetype = cisco:asa
# tcpDestinationHost = splunk_uf1
# tcpDestinationPort = 3333
token.0.token = \w{3} \d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %b %d %H:%M:%S

It works fine with tcpout (the commented-out bits above) but not as s2s. I'm executing eventgen like this:

/usr/bin/python3.7 /usr/bin/splunk_eventgen -v generate /opt/splunk-eventgen/default/eventgen.conf

The reason I'm using s2s is that I'd like to generate sample data as if it's coming from many hosts, sources, and sourcetypes, and I can't do that with tcpout. In the above config, splunk_search is a standalone test Splunk install. Sending directly to this Splunk host via s2s fails. If I switch back to tcpout, then I'm sending to a Splunk UF with a TCP input configured, which then sends to splunk_search via tcp/9997. eventgen was installed and configured as per http://splunk.github.io/eventgen/SETUP.html#install Any suggestions?
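A hedged diagnostic: the traceback fails while decoding _raw as UTF-8, so the sample file may contain a byte (0xb3) that is not valid UTF-8; tcpout may simply pass raw bytes through while the s2s plugin decodes them first. A short Python sketch to locate offending lines (the sample path is an assumption):

import sys

# print each line of the sample file that fails to decode as UTF-8
with open("/opt/splunk-eventgen/samples/cisco_asa.sample", "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as err:
            print(lineno, err, file=sys.stderr)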
I am not able to connect to the Splunk web console using the Python SDK; I am getting a timeout error and suspect the port is not open. Could you please help?

My code:

import sys
import getpass
sys.path.append('splunk-sdk-python-1.6.16')
import splunklib.client as client

def setServer(hostname, splunkuser, splunkpassword):
    HOST = hostname
    PORT = 8089
    USERNAME = splunkuser
    PASSWORD = splunkpassword
    service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
    # for app in service.apps:
    #     print(app.name)
    # Get the collection of users
    # users = service.users
    # print(users)

if __name__ == "__main__":
    hostname = input("Enter Splunk Hostname/IP: ")
    splunkUser = input("Enter Splunk Admin Username: ")
    splunkPassword = getpass.getpass("Enter Splunk Admin Password: ")
    setServer(hostname, splunkUser, splunkPassword)

Output:

# python3 test66.py
Enter Splunk Hostname/IP: prd-p-pivip.splunkcloud.com
Enter Splunk Admin Username: sc_admin
Enter Splunk Admin Password:
Traceback (most recent call last):
  File "/Users/sasikanth.bontha/test66.py", line 23, in <module>
    setServer(hostname, splunkUser, splunkPassword)
  File "/Users/sasikanth.bontha/test66.py", line 11, in setServer
    service = client.connect(host=HOST,port=PORT,username=USERNAME,password=PASSWORD)
  File "/usr/local/lib/python3.9/site-packages/splunklib/client.py", line 331, in connect
    s.login()
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 883, in login
    response = self.http.post(
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 1242, in post
    return self.request(url, message)
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 1259, in request
    response = self.handler(url, message, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/splunklib/binding.py", line 1399, in request
    connection.request(method, path, body, head)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1257, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1303, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1252, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1012, in _send_output
    self.send(msg)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 952, in send
    self.connect()
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1419, in connect
    super().connect()
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 923, in connect
    self.sock = self._create_connection(
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py", line 843, in create_connection
    raise err
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py", line 831, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
Hello, I have created this search: index=reg host=mp1 "export_successful" | timechart count by "import_successful". Out of this I have created a column chart for visualization. At the moment it shows whether there was a successful export each day (there is just one per day), but I would also like to see when the export was not successful. What is the easiest way to do it? Thank you, ava
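A minimal sketch of one approach: search for both outcomes and split the timechart by a derived outcome field. The "export_failed" string is an assumption; substitute whatever your logs actually emit on failure:

index=reg host=mp1 ("export_successful" OR "export_failed")
| eval outcome=if(searchmatch("export_successful"), "successful", "failed")
| timechart span=1d count by outcome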
I have this query:

index="main" | stats count by Text | sort -count | table count Text

Results:

count    Text
10       b'dog fish
20       dog cat

How can I drop the " b' " prefix from the beginning of the results (only from the beginning, not everywhere in the string)?
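A minimal sketch using replace with a regex anchored at the start of the string, so only a leading b' is removed; it goes before the stats so the cleaned values group together:

index="main"
| eval Text=replace(Text, "^b'", "")
| stats count by Text
| sort -count
| table count Text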
I am trying to run the following tstats search on an indexer cluster, recently updated to Splunk 8.2.1:

| tstats count where index=_internal by host

The search returns no results. I suspect the reason is this message in the search log of the indexer:

Mixed mode is disabled, skipping search for bucket with no TSIDX data: \opt\splunkhot\_internaldb\db\hot_v1_4334

When I check the specified bucket folder, I can see the tsidx files inside. Interestingly, this issue occurs only with the _internal index; the same command works fine with other indexes. I have the "Splunk's Internal Server Logs" data model enabled and accelerated. Any suggestions on where to start troubleshooting this issue?
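A hedged starting point: dbinspect reports per-bucket metadata straight from the indexers, so it can show whether Splunk itself considers those buckets to have tsidx data (the tsidxState field may not be present on all versions):

| dbinspect index=_internal
| table bucketId state tsidxState splunk_server path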
I have this query:

index="main" | stats count by Text | sort -count | table count Text

Results:

count    Text
10       dog fish
20       dog cat

How can I change it to compare only the first X characters of Text, for example the first 4 characters, so that "dog fish" and "dog cat" become one line?

count    Text
30       dog .....
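A minimal sketch using substr to group on the first 4 characters, as in the example (note that the fourth character here is the space after "dog"):

index="main"
| eval Text=substr(Text, 1, 4)
| stats count by Text
| sort -count
| table count Text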
Hi, I have the following search, and its graph looks the way I want; however, if the data stops coming in, I want to see "black steps" to the right, to show the user there is no more data. Ideally I want a timechart, as that would do this naturally, but I am not sure how to get one from the query I have. Another option is to keep filling time up to now() and then use fillnull to set all the values to 0.

| mstats max("mx.process.cpu.utilization") as cpuPerc max("mx.process.threads") as nbOfThreads max("mx.process.memory.usage") as memoryCons max("mx.process.file_descriptors") as nbOfOpenFiles avg("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=1000s BY pid
| foreach * [ eval <<FIELD>>=if(<<FIELD>> > .0000001, 1, 0)]

Any help would be great, thanks.
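A hedged sketch of the second idea (extend the timeline to now(), then zero-fill): append a single empty result stamped at now, make _time continuous, and fill the gaps. Shown for one metric and without the BY pid split, which makecontinuous does not handle per series; with multiple pids you would need to do this per pid or via timechart:

| mstats max("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=1000s
| append [ | makeresults ]
| makecontinuous _time span=1000s
| fillnull value=0 cpuPerc
| foreach cpuPerc [ eval <<FIELD>>=if(<<FIELD>> > .0000001, 1, 0)]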
Hi all, I am trying to create a dashboard in trellis view. I created the below query for my search:

index=abcd host="mwgcb-ckbla02U*" source="/logs/confluent/kafkaLogs/server.log"
| rex field=_raw "(?ms)]\s(?P<Code>\w+)\s\["
| search Code="WARN"
| rex field=_raw "^(?:[^ \n]* ){3}\[(?P<code_id>[^\]]+)"
| search code_id="AdminClient clientId=adminclient-*"
| stats count
| eval mwgcb-ckbla02u=if(count=0, "Running", "Down")
| table mwgcb-ckbla02u

Here, I am using the trellis view with the "single value" visualization. It all came up perfectly, but I am not able to change the colour of the trellis box: when it's "Running", the box should be green, and when "Down", it should be red. Can anyone please help with this? Thanks.
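A hedged note plus a sketch: single-value range colouring works on numbers, not strings, so one approach is to emit a numeric status for the colour and set ranges in the panel's Simple XML (the option names are the standard single-value ones; the 0/1 mapping is an assumption):

| eval status=if(count=0, 0, 1)
| table status

<option name="useColors">1</option>
<option name="rangeValues">[0]</option>
<option name="rangeColors">["0x53a051","0xdc4e41"]</option>

With rangeValues set to [0], a value of 0 (Running) takes the first colour (green) and anything above 0 (Down) takes the second (red).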
I have a network with several security zones, each with its own domain and its own heavy forwarder (e.g. domain XYZ). There are very restrictive firewalls between these zones. All events from all domains end up on my management zone's indexer and are accessed by a search head in the same management zone (e.g. domain ABC). Currently, it seems that the ldapsearch input cannot be distributed back to each of the heavy forwarders on domain XYZ and is only executed on my management domain ABC. According to company policy and network design principles, the management zone Splunk instances cannot be allowed to perform LDAP queries directly against other domains (XYZ), which are considered more secure. How can I either instruct the heavy forwarders on other domains to execute the ldapsearch on my behalf, or have them run these searches on a schedule so the information lands in the index? Is it possible to instruct the heavy forwarder of domain XYZ to perform the ldapsearch on behalf of the search head in domain ABC?
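One hedged pattern for the scheduled variant: a heavy forwarder is a full Splunk instance, so SA-ldapsearch installed and configured locally on each HF could run a scheduled search that collects the results into a summary index, which then forwards to the management indexer like any other data. A sketch (the domain, filter, attributes, and index name are all placeholders):

| ldapsearch domain=XYZ search="(objectClass=user)" attrs="sAMAccountName,memberOf"
| collect index=ldap_summary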