All Topics



I have a large list of data and want to see only the lines that include certain words, e.g. "restart". I want to see all the minutes spent restarting a product, create categories for certain words, sum the minutes, and show the result in a pie chart. The line items may say restarted, RESTARTED, re started, etc., and I want to capture all of those in one section of the pie. I can do a Google Hangout if anyone would like to work with me on this.
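A sketch of one way to do this in SPL, assuming a numeric field `mins` has already been extracted and using a hypothetical index name; the case-insensitive regex groups restarted/RESTARTED/re started into one category:

```
index=myindex ("restart" OR "restarted" OR "re started")
| eval category=case(match(_raw, "(?i)re\s?start"), "Restart", true(), "Other")
| stats sum(mins) AS total_mins BY category
```

Rendering the stats output with the pie chart visualization then gives one slice per category.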
Hi, can someone explain why Splunk SmartStore requires 90 days of local storage when using Enterprise Security, rather than 30 days? Many thanks in advance.
on this page: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/ the link for "Tutorial: Use KV Store with a simple app" is broken. Can someone direct me to a working version of this link?   
I would like assistance installing AVAYA call after I download it. Thanks
Hello! I installed Splunk Enterprise and then created a new user with the power and user roles. I want to add data, but I can't, because nothing like the Add Data option appears in the home app. Does anybody know what to do?
Hi all, I need help configuring an alert for RADIUS accounting requests per second. To find requests per second we use this search:

sourcetype="cisco:bulkstats:up:systemSch10" host=dyu-sae-1-1 | stats sum(aaa_ttlradacctreq) as req sum(aaa_ttlradacctreqretried) as retr by _time | delta req as rq | delta retr as rt | timechart span=5m per_second(rq) as "requests per second" per_second(rt) as "retries per second"

per_second(rq) shows approximately 400 requests/s, so I want to configure an alert if this reaches 600 requests/s. Any help appreciated. Many thanks.
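One option is to append a where clause to the existing search and have the alert trigger when the search returns any rows; this sketch reuses the search from the question with only the request side kept:

```
sourcetype="cisco:bulkstats:up:systemSch10" host=dyu-sae-1-1
| stats sum(aaa_ttlradacctreq) AS req BY _time
| delta req AS rq
| timechart span=5m per_second(rq) AS rps
| where rps > 600
```

Saved as an alert with the trigger condition "number of results is greater than 0", this fires only when some 5-minute bucket exceeds 600 requests/s.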
Hi, I am new to Splunk, so I would like to know: is there an alert for fingerprint login as well? Please let me know; I would also like to know the search commands used in the search box.
I get this table as output from my base query:

COL1 | COL2 | COL3 | ...so on
A,a  | B,b  | C,c
X,x  |      | Y,y
Z,z  |      |

Here A,a, X,x, and Z,z are all in the same row (same cell) of COL1. Desired output:

COL1 | COL2      | COL3      | ...so on
A,a  | B,b       | C,c
X,x  | Null,Null | Y,y
Z,z  | Null,Null | Null,Null

Can someone please help me with this?
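One way to split multivalue cells into separate rows in SPL: build an index over the longest column, mvexpand it, then pull the n-th value from each column, falling back to "Null,Null" where a column runs out of values. A sketch, using the column names from the example:

```
| eval n=mvrange(0, max(mvcount(COL1), mvcount(COL2), mvcount(COL3)))
| mvexpand n
| eval COL1=coalesce(mvindex(COL1, n), "Null,Null"),
       COL2=coalesce(mvindex(COL2, n), "Null,Null"),
       COL3=coalesce(mvindex(COL3, n), "Null,Null")
| fields - n
```

mvindex returns null past the end of a multivalue field, so coalesce supplies the padding.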
Hello, I want to search for all src hosts that connect to a specific destination, with or without intermediary hops. I want to use a recursive query on the core firewall logs and their dest and src fields to find all sources. Would you please help me with this query?
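SPL has no recursive joins, but a common workaround is to expand one hop at a time with nested subsearches. A sketch for direct plus one-hop sources; the index name `fw`, the field names `src_ip`/`dest_ip`, and the destination address are all assumptions, and each additional hop needs another nesting level (subsearches have result limits, so this does not scale to arbitrary depth):

```
index=fw dest_ip="10.1.2.3"
| fields src_ip
| append
    [ search index=fw
        [ search index=fw dest_ip="10.1.2.3" | fields src_ip | rename src_ip AS dest_ip ]
      | fields src_ip ]
| stats values(src_ip) AS all_sources
```

The inner subsearch renames src_ip to dest_ip so its results become `dest_ip=...` filters for the next hop outward.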
Hi, I am getting this error: "Unable to distribute to peer named **** at uri https://***** because replication was unsuccessful. replicationStatus Failed failure info: failed_because_HTTP_REPLY_READ_FAILURE. Please verify connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available." Why am I getting this error message, and what does it mean? The error is not frequent and only appears for a specific alert. How do I fix it? The alert is supposed to trigger only when there are zero results. When I run the search manually in the Search app I see the events present, but when I look at the triggered alerts and inspect the job, I see zero events and this error message.
Hello, I've created a remote report that runs in GCP and updates every 30 minutes, and we display the results on a dashboard in an on-premises environment. However, only certain users are able to view this data. For admins it is fine, but even power users aren't able to see the data; we just get "Unable to find object id= xxxxx". Yet if I give the users our local_power role, which was created with FEWER capabilities than the built-in power role, it works fine. I spent about three hours yesterday trying to find out why, but it's a mystery. Anyone have an idea what's causing this?
I have a table of users and their position level across an organization. How would I join the table of positions, get their internal and external IDs, and fill in the column with bold green fonts? Desired table:

Name    | Level  | Internal ID | External ID
User 1  | Level1 | 904787      | ZZ88985
User 2  | Level2 | 927819      | ZZ55135
User 3  | Level2 | 701876      | ZZ64157
User 4  | Level3 | 166387      | ZZ89635
User 5  | Level3 | 73914       | ZZ93585
User 6  | Level3 | 394497      | ZZ65026
User 7  | Level3 | 200662      | ZZ99972
User 8  | Level3 | 925192      | ZZ94890
User 9  | Level4 | 254770      | ZZ45273
User 10 | Level4 | 174055      | ZZ55961
User 11 | Level4 | 344944      | ZZ81383
User 12 | Level4 | 114436      | ZZ37757
User 13 | Level4 | 672453      | ZZ68642
User 14 | Level4 | 992512      | ZZ82497
User 15 | Level4 | 915758      | ZZ33143

Based on the two tables below. Table of positions:

Level 1 | Level2 | Level3 | Level4
User 1  | User 2 | User 4 | User 9
User 1  | User 2 | User 4 | User 10
User 1  | User 2 | User 5 | User 11
User 1  | User 2 | User 6 | User 12
User 1  | User 3 | User 6 | User 13
User 1  | User 3 | User 7 | User 14
User 1  | User 3 | User 8 | User 15

User ID detail:

UserID  | Internal ID | External ID
User 1  | 773236      | ZZ60307
User 2  | 720417      | ZZ91613
User 3  | 327957      | ZZ36532
User 4  | 865654      | ZZ28800
User 5  | 128875      | ZZ67338
User 6  | 858309      | ZZ60570
User 7  | 878572      | ZZ56897
User 8  | 804657      | ZZ72104
User 9  | 90130       | ZZ13737
User 10 | 983968      | ZZ68473
User 11 | 33431       | ZZ88498
User 12 | 205262      | ZZ93466
User 13 | 505492      | ZZ45170
User 14 | 876947      | ZZ91395
User 15 | 229730      | ZZ18609
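A sketch of the join, assuming the two source tables are saved as lookup files — hypothetical names `user_levels.csv` (columns Name, Level, one row per user) and `user_ids.csv` (columns UserID, InternalID, ExternalID):

```
| inputlookup user_levels.csv
| lookup user_ids.csv UserID AS Name OUTPUT InternalID ExternalID
| table Name Level InternalID ExternalID
```

The bold green font is a presentation concern: SPL only produces the table, and the coloring is applied with table cell formatting in the dashboard, not in the search.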
Hi there, I'm having a really hard time creating an alert based on a search that detects the absence of events. I have a list of all the customers we monitor in a .csv in Splunk called Provider_Alert.csv. My goal is to create (in SQL terms) a "left" join where my "left" table is all the providers from Provider_Alert.csv and the second, joined table is based on Splunk-logged events (let's call this Search_A); where there is no match, the count should be 0. An additional challenge is that the mutual field joining Provider_Alert.csv and Search_A is one I have to derive with eval and coalesce statements (let's call it partner_idd), since it's split across two fields in Search_A. So, TL;DR, I'd like to join: all entries in Provider_Alert.csv, against whatever events are available from Search_A after "| eval partner_idd=coalesce(field1, field2)", joined on partner_idd. And if there are no results from the join, the value should be 0. Also happy to take recommendations; I've spent a whole afternoon on this, so I'm desperate and open to any suggestions.
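A sketch of the left-join pattern, assuming Provider_Alert.csv has a partner_idd column (rename first if yours is called something else) and using a hypothetical `index=myindex sourcetype=mysourcetype` as a stand-in for Search_A:

```
| inputlookup Provider_Alert.csv
| join type=left partner_idd
    [ search index=myindex sourcetype=mysourcetype
      | eval partner_idd=coalesce(field1, field2)
      | stats count BY partner_idd ]
| fillnull value=0 count
```

fillnull turns the providers with no matching events into count=0, so the alert can trigger on `count=0` rows.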
Hello! I am having trouble uploading any files to Splunk using Add Data. This is the whole error message I am getting:

HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/indexing/preview?output_mode=json&props.NO_BINARY_CHECK=1&input.path=access_30DAY.log (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000000004E3AB88>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')))

I have tried reinstalling, restarting the computer, turning off "Use a proxy server for your LAN" on my Windows 7 operating system, turning off Windows Firewall, clearing all my cookies, and uploading a few other files. I entered "netsh winhttp show proxy" into my cmd prompt and received "Current WinHTTP proxy settings: Direct access <no proxy server>." I also entered netstat -an | find "8089" and the response is "TCP 0.0.0.0:8089 0.0.0.0:0 LISTENING". I don't know if any of this is helpful, but if anyone is able to help I would greatly appreciate it. To the best of my knowledge I'm not using a proxy server, but I don't really have any expertise in this area; I just want to upload a log file to Splunk.
I have a user who kicks off long-running queries and complains that he gets job failures: "DAG Execution Exception: Search has been cancelled", "Search auto-canceled", "The search job has failed due to an error. You may be able view job in the Job Inspector." When I run the same query as admin, the job takes a while but completes without failure, usually around 20 minutes. I checked the SHC RAM usage and it looks fine; the resources appear fine. I don't know whether the user is running the search at high-usage times, and I am not sure how to troubleshoot where the issue is. Any advice is appreciated. Thank you.
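One place worth comparing is the search limits on the user's role versus admin in authorize.conf: a role-level runtime cap or quota can kill a long search for the user while the same search completes fine for admin. A hypothetical sketch (role name and values are assumptions, not recommendations):

```
# authorize.conf on the search head -- limits inherited by the user's role
[role_analyst]
srchMaxTime = 1h
srchDiskQuota = 500
srchJobsQuota = 8
```

If the user's effective srchMaxTime is below the ~20 minutes the job needs, or their job/disk quota is exhausted by concurrent searches, the job is cancelled for them but not for admin.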
Hello, how can I get the dirname from a filepath? I am looking for something like this:

Ankits-MacBook-Pro:~ akotak$ dirname /splunk/is/not/easy/
/splunk/is/not
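There is no built-in dirname function in SPL, but an eval with replace can mimic it by stripping the last path component (and any trailing slash); `filepath` here is an assumed field name:

```
| eval dir=replace(filepath, "/+[^/]+/*$", "")
```

For /splunk/is/not/easy/ this yields /splunk/is/not, matching the shell dirname output in the example.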
Hello Splunk Community, I could certainly use your help getting out of a rather large jam. I need guidance on how to properly re-route a subset of events generated by one of our applications, running within a Docker container, before it is indexed by Splunk.

Our development team updated the application's logging mechanism so that this subset of events is no longer written to a flat log file; it is now written to STDOUT/STDERR within the container itself. As a result, the specific events I need to extract are lumped into the same index as the events from every other container process running on the same Docker Swarm. This is a problem for me because I am new to administering Splunk and struggling to understand whether it will be possible to extract this subset of events while filtering out the noise from every other container process writing to the same index.

To get data in from Docker, we use the Splunk Logging Driver for Docker and have replaced the "/etc/docker/daemon.json" config file with the following configuration on all of our Docker clusters:

{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true,
  "log-driver": "splunk",
  "log-opts": {
    "splunk-format": "json",
    "splunk-verify-connection": "false",
    "splunk-token": "TOKEN",
    "splunk-url": "URL-TO-SPLUNK",
    "splunk-insecureskipverify": "true",
    "tag": "{{.DaemonName}}/{{.Hostname}}/{{.Name}}/{{.ID}}"
  }
}

The HEC token we have configured for Docker events is as follows:

[http://docker]
disabled = 0
index = main
sourcetype = hec:swarm
token = $HEC-Token
useACK = 0

Since we use the same HEC token across all of our Docker Swarms, we have some props.conf and transforms.conf stanzas, configured by a former co-worker, that change the index these events flow to based on source type.
#props.conf
[hec:swarm]
SEDCMD-0_hec_tags_rename = s/tag/tags/g
TRANSFORMS-0_hec_indexer = docker_change_index
TRANSFORMS-1_docker_sourcetyper = docker_sourcetyper
TRANSFORMS-9_clean_fields = clean_docker_sourcetypes
KV_MODE = json
ANNOTATE_PUNCT = false

[(::){0}json:platform:*]
ANNOTATE_PUNCT = false
KV_MODE = json
REPORT-vp_app_extract = vp_app_extract
REPORT-docker_msghdr_extract = docker_msghdr_extract
REPORT-docker_auth_events_extract = docker_auth_events_extract
REPORT-docker_kv_extract = inline_kv_extract
EVAL-action = case(event == "Authentication failed", "failure", event == "Authentication success", "success", true(), null())

#transforms.conf
# Extract message header
[docker_msghdr_extract]
REGEX = \,\d{3} [^A-Z]+(?<level>\S+)(?:[^\[]+\[){2}\d+m(?<process>[^\\]+)

# Extract authentication events
[docker_auth_events_extract]
REGEX = (?<event>Authentication [^\:]+)

# Extract KV pairs
[docker_kv_extract]
REGEX = (\S+) = [']*([^,"']*)
FORMAT = $1::$2

# Extract platform application
[vp_app_extract]
#REGEX = tags\":\"docker(?:[^_]+)_[^_]+_(?<app>[^\.]+)
REGEX = tags\":\"docker\/(?:[^\/]+)\/(?<stack>[a-zA-Z-]+)_(?<app>[^\.]+)

####### SOURCETYPERS #########################
[docker_sourcetyper]
DEST_KEY = MetaData:Sourcetype
REGEX = tags\":[^\/]+\/(?:[^-]+-){2}([^-]+)
FORMAT = sourcetype::json:platform:$1

####### CHANGE INDEX ############################
[docker_change_index]
DEST_KEY = _MetaData:Index
REGEX = \"tags\":\"docker/([^-]+)-([^-]+)-([^-]+)
FORMAT = $1$2_$3

####### CLEAN ############################
# Remove '-' from sourcetype
[clean_docker_sourcetypes]
INGEST_EVAL = sourcetype=replace(sourcetype, "(-|_)", "")

The event subset I'm trying to re-route to a different index contains information in the following format. This is a raw example pulled from Splunk after the event was indexed, and it has been sanitized as much as possible:
{"line":"$dockerContainerID,$YYYY-$MM-$DD:$HH:$MM:$SS,$AlphaNumericString,$AlphaString,$AlphaString,$AlphaString,$numericString,$AlphaString,$numericString,,,$DollarAmount,$AlphaString,$AlphaString,$AlphaString,$FloatingPointString,$FloatingPointString,$NumericString,$NumericSring","source":"stderr","tags":"docker/$DockerNodeHostName/$DockerStack_$StackApplication.1.0fj4pexdb3m16giqp1atrfco5/47fb3b6218d5"}

Based on our current configuration, could anyone lend a guiding hand on my best path forward to extract this subset of events and redirect it to its own index, if what I'm attempting is possible at all? I fear it is not, because every container process writes to the same index and has the same host, source, and source type. Based on my understanding of how props.conf stanzas are defined, I'm not sure I can define any rules that won't affect every event:

[<spec>]
* This stanza enables properties for a given <spec>.
* A props.conf file can contain multiple stanzas for any number of different <spec>.
* Follow this stanza name with any number of the following setting/value pairs, as appropriate for what you want to do.
* If you do not set a setting for a given <spec>, the default is used.

<spec> can be:
1. <sourcetype>, the source type of an event.
2. host::<host>, where <host> is the host, or host-matching pattern, for an event.
3. source::<source>, where <source> is the source, or source-matching pattern, for an event.
4. rule::<rulename>, where <rulename> is a unique name of a source type classification rule.
5. delayedrule::<rulename>, where <rulename> is a unique name of a delayed source type classification rule. These are only considered as a last resort before generating a new source type based on the source seen.
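Since the raw event text still contains "source":"stderr" and the stack/application name inside the tags field, one approach is an additional index-time transform on the same [hec:swarm] sourcetype that matches on that content rather than on host/source metadata. A sketch; the stack/app pattern `mystack_myapp` and the target index `myapp_stderr_index` are hypothetical placeholders, and because transform classes run in name order, naming it after 0_hec_indexer lets it override docker_change_index for matching events only (the target index must already exist on the indexers):

```
# props.conf -- added to the existing [hec:swarm] stanza
[hec:swarm]
TRANSFORMS-2_reroute_stderr = reroute_myapp_stderr

# transforms.conf
[reroute_myapp_stderr]
REGEX = "source":"stderr".*"tags":"docker/[^/]+/mystack_myapp\.
DEST_KEY = _MetaData:Index
FORMAT = myapp_stderr_index
```

This only works if these events pass through the same parsing pipeline as the existing docker_change_index transform, which the current configuration suggests they do.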
The Splunk built Add-on for JIRA app has not been touched since 2017 and is unfortunately not compatible with python 3 syntax and libraries.  We use this app in our organization for some of our workf... See more...
The Splunk-built Add-on for JIRA app has not been touched since 2017 and is unfortunately not compatible with Python 3 syntax and libraries. We use this app in our organization for some of our workflows to monitor relationships between certain stories and application log events, and we have recently upgraded to version 8. It would be nice if Splunk would let us know whether this app will continue to be maintained, or if there is a recommended alternative with similar JQL querying functionality. For anyone else in this boat, we found a workaround that lets the app continue to function on version 8: create a commands.conf entry in the /etc/apps/jira/local folder with the entries below.

[jira]
python.version = python2

[jirarest]
python.version = python2

The problem with this solution is that once Splunk decides to no longer offer a Python 2 interpreter, you are stuck coming up with another solution.
Hello, I recently faced an issue when populating a summary index. I scheduled a saved search to run every hour (with a time range of the last 60 minutes) and populate a summary index. The search takes around 5 minutes to complete each time. My problem is that every time this scheduled search runs, events from the last 30 seconds of the time range are missing from the results. For example, for a one-hour time range like 9:00:00 to 10:00:00, the index is only populated with events from 9:00:00 to 9:59:30. This has caused gaps and discrepancies in our index data. Is there any way to solve this? I searched a lot but couldn't find an answer. Thanks.
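A likely cause of a clipped tail like this is indexing lag: when the search fires at 10:00:00 over 9:00–10:00, events from the final seconds of the window may not be indexed yet. A common fix is to delay the run a few minutes while keeping the time range snapped to the whole hour, e.g. this savedsearches.conf sketch (stanza and summary index names are hypothetical):

```
[populate_my_summary]
cron_schedule = 5 * * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
action.summary_index = 1
action.summary_index._name = my_summary_index
```

Running at five past the hour still summarizes exactly 9:00:00–10:00:00, but gives the tail-end events time to land in the index first.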
Hello all! I have alphanumeric timestamps that I'd like to convert to seconds. I'm trying to convert these two timestamps to seconds and then subtract one from the other to find the total duration of a phone call:

Wed Mar 03 13:38:36 PST 2021
Wed Mar 03 13:29:29 PST 2021

Could someone please point me in the right direction? Much appreciated; this has me at my wit's end!
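strptime in eval parses timestamps in this format into epoch seconds, after which subtraction gives the duration; `call_start` and `call_end` are assumed field names holding the two timestamps:

```
| eval start_epoch=strptime(call_start, "%a %b %d %H:%M:%S %Z %Y")
| eval end_epoch=strptime(call_end, "%a %b %d %H:%M:%S %Z %Y")
| eval duration_secs=end_epoch - start_epoch
```

For the two sample timestamps above, duration_secs comes out to 547 seconds (9 minutes 7 seconds).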