All Topics

I have installed the Splunk Add-on for AWS. I have configured my AWS account and input. I get metrics from everything required from CloudWatch, e.g. RDS, ElastiCache, EC2, etc., except ECS. Does anyone know anything about this?
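Not a definitive answer, but one thing worth checking: the add-on only pulls the CloudWatch namespaces listed in its inputs, and AWS/ECS may not be included in yours. A sketch of what a CloudWatch input stanza scoped to ECS might look like — the stanza name, account name, region, and index are placeholders, and the option names should be verified against your installed add-on's inputs.conf.spec:

```ini
# inputs.conf -- hypothetical CloudWatch input scoped to the ECS namespace
[aws_cloudwatch://ecs-metrics]
aws_account = my-aws-account                  ; placeholder account name from the add-on setup
aws_region = us-east-1                        ; placeholder region
metric_namespace = AWS/ECS                    ; the namespace ECS metrics live in
metric_names = .*                             ; all metrics in the namespace
metric_dimensions = [{"ClusterName":[".*"]}]  ; all clusters
period = 300
index = aws                                   ; placeholder index
```

Also worth confirming in the CloudWatch console that the AWS/ECS namespace actually contains data for your clusters before blaming the add-on.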
Hi, I would like to open a popup saying "please wait a few seconds" when I open my dashboard. How can I do this, please?
Dear Experts, I am using the sendalert command to invoke a custom alert action. It currently only triggers once, irrespective of the number of results. Is it possible to trigger it for each result?
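If the action is attached to a saved alert (rather than invoked ad hoc with sendalert in a search), per-result triggering is a setting on the alert itself. A sketch, assuming a hypothetical alert name and action name:

```ini
# savedsearches.conf -- fire the action once per result instead of once per search
[my_alert]                        ; hypothetical alert name
alert.digest_mode = 0             ; 0 = "For each result", 1 = once per triggered search
action.my_custom_action = 1       ; hypothetical custom alert action name
```

In the UI this corresponds to the alert's "Trigger" setting: "For each result" instead of "Once".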
Hello, I have the following situation. In the original files I have the following values in the field:

ServerName1
ServerName10

I need to replace the values in the output with something like:

srv01
srv10

I know that the following command replaces ServerName with srv, but I cannot figure out how to add a 0 before the numbers that have only one digit:

| rex field=AIS_ServerHost mode=sed "s/ServerName/srv/g"

The output is:

srv1
srv10

A bit of help, please? Thank you!
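One possible approach: sed mode can't do arithmetic, but a second substitution can insert a 0 in front of a lone trailing digit. A sketch, assuming the values always end in the digits shown:

```spl
| rex field=AIS_ServerHost mode=sed "s/ServerName/srv/g"
| rex field=AIS_ServerHost mode=sed "s/srv([0-9])$/srv0\1/"
```

The $ anchor makes the second substitution match only values with a single trailing digit, so srv10 is left alone while srv1 becomes srv01.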
Hi guys, I have Splunk 7.3.4 and am trying to set up the Splunk Add-on for ServiceNow, but the inputs page in the UI is not loading to configure any inputs. FYI, I tried creating /local/inputs.conf; that didn't work. Did anyone face the same issue? Cheers
Splunk version: 7.3.3. We are testing a custom alert action. We copied the files from etc/apps/alert_logevent as alert_test, then used the example from https://docs.splunk.com/Documentation/Splunk/7.3.3/AdvancedDev/ModAlertsBasicExample and configured alert_actions.conf and logger.py. We set up an alert, added the custom alert action to it, and the alert runs every 2 minutes. The logger example implements a custom alert action that does the following:

- Creates a path to a log file when the alert first fires.
- Writes log messages to the log file when the alert fires.
- Writes log information to an existing Splunk Enterprise log file.

But when we cat the log, we found messages like these:

2021-02-05T11:08:01.473866 got arguments ['/data/eccom_gao/splunk/etc/apps/alert_log_test/bin/logger.py', '--execute']
2021-02-05T11:08:01.474097 got payload: {"app":"search","owner":"admin","result_id":"0","results_file":"/data/eccom_gao/splunk/var/run/splunk/dispatch/scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94/per_result_alert/tmp_0.csv.gz","results_link":"http://056-gj-test01:8000/app/search/search?q=%7Cloadjob%20scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94%20%7C%20head%201%20%7C%20tail%201&earliest=0&latest=now",.................................}
2021-02-05T11:08:01.615030 got arguments ['/data/eccom_gao/splunk/etc/apps/alert_log_test/bin/logger.py', '--execute']
2021-02-05T11:08:01.615210 got payload: {"app":"search","owner":"admin","result_id":"1","results_file":"/data/eccom_gao/splunk/var/run/splunk/dispatch/scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94/per_result_alert/tmp_1.csv.gz","results_link":"http://056-gj-test01:8000/app/search/search?q=%7Cloadjob%20scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94%20%7C%20head%202%20%7C%20tail%201&earliest=0&latest=now",...........................................................}
2021-02-05T11:13:01.761179 got arguments ['/data/eccom_gao/splunk/etc/apps/alert_log_test/bin/logger.py', '--execute']
2021-02-05T11:13:01.761385 got payload: {"app":"search","owner":"admin","result_id":"0","results_file":"/data/eccom_gao/splunk/var/run/splunk/dispatch/scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94/per_result_alert/tmp_2.csv.gz","results_link":"http://056-gj-test01:8000/app/search/search?q=%7Cloadjob%20scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94%20%7C%20head%203%20%7C%20tail%201&earliest=0&latest=now",...............................................}
2021-02-05T11:13:01.761179 got arguments ['/data/eccom_gao/splunk/etc/apps/alert_log_test/bin/logger.py', '--execute']
2021-02-05T11:13:01.761385 got payload: {"app":"search","owner":"admin","result_id":"1","results_file":"/data/eccom_gao/splunk/var/run/splunk/dispatch/scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94/per_result_alert/tmp_2.csv.gz","results_link":"http://056-gj-test01:8000/app/search/search?q=%7Cloadjob%20scheduler__admin__search__RMD5e0e0606133e59cd5_at_1612494480_94%20%7C%20head%203%20%7C%20tail%201&earliest=0&latest=now",...............................................}

It seems that the timestamps in the log are not consistent with the times the alert runs. The log is not written every two minutes; sometimes it takes five minutes to write to the log. Can anyone help me, please?
Hi. My environment is running the Splunk Stream app. Logs from my Windows servers are streamed to a heavy forwarder and then out to Splunk Cloud. The index they fall under is index=stream. I am trying to determine if a particular Windows server's stream data is making it. The streamfwd process on the server is running. The server is named server1. At the indexer, I tried running this search but nothing returns:

index=stream host=server1

If I run a search like this, I see one HOST and 100+ hostnames in the same event:

index=stream hostname{}=server1

Any recommendation?
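A possible angle: since the hostnames sit in a multivalue/array field (hostname{}) rather than in host, expanding that field lets you filter and count per server. A sketch, assuming the JSON path is hostname{} as in your second search:

```spl
index=stream
| spath output=hostnames path=hostname{}
| mvexpand hostnames
| search hostnames=server1
| stats count by hostnames
```

If this returns events, the data is arriving and the question is only how it's attributed to host; if it returns nothing, that server's streamfwd output isn't reaching the index.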
Hi team, I can see my license limit has been reached for my syslog-ng. Can you please let me know how I can get a list of all the hosts that are sending data via syslog-ng to Splunk? Thanks, AG.
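One way to see which hosts are consuming license is the license usage log on the license master. A sketch, assuming the syslog-ng data is indexed with sourcetype=syslog (adjust st= to your actual sourcetype):

```spl
index=_internal source=*license_usage.log type=Usage st=syslog
| stats sum(b) as bytes by h
| sort - bytes
```

Here h is the host and b the number of bytes counted against the license; license_usage.log lives on the license master, so run this there or from a search head that can reach it.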
I have an event list in a dashboard so I can use a workflow action. What I found is that it would be easier to have a drilldown in the panel to get to the URL the workflow is using. Putting the data into a table is not ideal because of how everything gets formatted. I want to keep using the Events panel type; is it possible to append a column between Time and Event?
Currently going over the Splunk App for Windows Infrastructure, I found a saved search that updates a lookup table. I mostly understand it, but there is a detail I am very curious about. The stanza is:

[WinApp_Lookup_Build_Perfmon - Update - Detail]
<field - value pairs>
search = `perfmon-index` eventtype="perfmon_windows" object=* \
| eval instance = if(isnull(instance), "NA", instance) \
| stats count by collection, object, counter, instance \
| sort collection, object, counter, instance \
| eval _key = collection . "___" . object . "___" . counter . "___" . instance \
| outputlookup windows_perfmon_details append=true

I understand every line in the search, what append=true does, and how setting a field ensures that a column with that name is added to the lookup table. What I don't understand is the specific eval that concatenates the four columns with three underscores in between. When I try to generate _key in a separate search, it results in an empty column, and from my reading of the outputlookup documentation, the field created is the column to be added. Any insights on why this specific eval?
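For context (worth verifying against your app version): _key is the KV store's internal record ID, so it only behaves specially when the lookup is backed by a KV store collection. If windows_perfmon_details is a KV store lookup, setting _key before outputlookup append=true turns the append into an upsert: a row with the same _key overwrites the existing record instead of creating a duplicate. That is why the eval concatenates the four grouping columns: together they form a stable, unique ID for each combination. In an ad-hoc search _key shows as empty because it is a hidden internal field; a sketch of how to inspect it by copying it into a visible field:

```spl
| inputlookup windows_perfmon_details
| eval key=_key
| table key collection object counter instance
```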
I'm using the Splunk plugin for Jenkins, and my Console Output is only visible for pipelines at the root of Jenkins. For jobs inside folders, I can't capture the information in Splunk.
Hello, my raw logs look something like this:

Example 1: 2021-02-03 23:59:07,216 LogLevel=INFO my_appid= intuit_tid=EFEtoPBI805aaa9f-9254-499b-ae80-2c39ca7b33cd provider_tid=fe62e521-a9c6-4d3a-8c45-a25a10abd5ac class=com.intuit.fds.provider.dao.impl.EinDAOImpl Get disabled ein by ein=821570477

Example 2: 2021-02-03 23:59:07,216 LogLevel=INFO my_appid= intuit_tid=EFEtoPBI805aaa9f-9254-499b-ae80-2c39ca7b33cd provider_tid=fe62e521-a9c6-4d3a-8c45-a25a10abd5ac class=com.intuit.fds.provider.service.impl.EinServiceImpl Create or update ein=821570477 einVO=EinVO [ein=821570477, active=false, einProviderRelationships=EinProviderRelationshipsVO [einProviderRelationship=[EinProviderRelationshipVO [id=null, active=null, providerId=5ece3c4d-6791-4bed-bbf5-fd9c0736c129, taxYear=2020, serviceName=W2, actualAvailabilityDate=2021-02-03T23:59:07.172-08:00, expectedAvailabilityDate=2021-02-03T23:59:07.172-08:00, preference=1, synced=false]]]]

My goal is to create a single field (let's call it action_type) whose value is determined by the presence of the string "Create or update" (action_type=add) or "Get disabled" (action_type=disable). My struggle is that these strings aren't associated with any fields, so I'm not sure how to have my eval use the LIKE function without a field to reference. Please help! My work:

[base query] | eval action_type=CASE(LIKE(??, "Get disabled"), "disable", LIKE(??, "Create or update"), "add", 1==1, "null")
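A sketch of one way to do it: when the string isn't in any extracted field, you can match against _raw, which always holds the full event text:

```spl
[base query]
| eval action_type=case(
    like(_raw, "%Get disabled%"), "disable",
    like(_raw, "%Create or update%"), "add",
    true(), "null")
```

If you prefer search-style matching, searchmatch("Get disabled") inside the case() is an alternative to like(_raw, ...).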
What would be a “safe” value for the TRUNCATE option in props.conf? I have some pretty big json events coming via HEC hitting the _json sourcetype (INDEXED_EXTRACTIONS=json).
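There is no single safe number; it depends on your largest legitimate event. A common pattern is to set an explicit large cap rather than 0, since 0 disables truncation entirely and one malformed event with no line break can then consume unbounded memory. A sketch of a props.conf fragment with a purely illustrative value:

```ini
[_json]
; 10000 is the default; raise it to comfortably above your biggest real event.
; TRUNCATE = 0 would disable truncation entirely, which is risky with malformed input.
TRUNCATE = 500000
```

A reasonable way to pick the number is to measure: search your existing data for len(_raw) maxima and set the cap a few times higher.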
Hello, setting up the DUO connection in Splunk, I keep receiving this error message: "Encountered the following error while trying to save: Argument validation for scheme=duo_input failed: The script returned with exit status 1." I have re-installed the app. Does anyone know how to resolve it? Thank you.
I am trying to put together an average duration (calculated and logged by the product) as well as a count. However, the logs show an "s" or "ms" at the end of the value to reflect how long processing took. I need to convert the results into an average duration but have been unable to figure it out. In this example it shows ms, but it could be seconds on the next record, with the value ending in s. Example record:

request.status='completed'; status.message=''; request.start='2021-01-29 15:50:25.402471006 +0000 UTC m=+501.139572300'; request.end='2021-01-29 15:50:26.193830852 +0000 UTC m=+501.930932145'; request.duration='791.359845ms'"

Here is my current query (note: requestduration is an extracted field whose value is request.duration, 791.359845ms in this case):

| stats count(requestduration) as count avg(requestduration) as Average by source

I get that I cannot average values with alpha characters in them, but I don't know how to convert the seconds into milliseconds, remove the characters, and average them. Any help would be appreciated!
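A sketch of one approach: strip the unit, normalize everything to milliseconds, then average. Note the ms check must come before the s check, since "ms" also ends in "s":

```spl
| eval dur_ms=case(
    match(requestduration, "ms$"), tonumber(replace(requestduration, "ms$", "")),
    match(requestduration, "s$"),  tonumber(replace(requestduration, "s$", ""))*1000)
| stats count as count avg(dur_ms) as avg_ms by source
```

If the product can also emit other units (µs, m), add matching branches to the case() before relying on the average.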
What are the parameters or rules on URLs for the licensing? I would like to use a URL to our Salesforce page outlining our licensing agreements.
Hello! I am new to Splunk; I am on the Fundamentals 1 course and I can't find the Add Data icon. I have just one account, and normally I am the admin. Does anyone know how to fix this problem?
My deployment server sits behind a load balancer. What I have noticed is that on the DS under Forwarder Management (Clients tab), all my UFs phoning home now appear with the same IP address (they have unique client names, host names, and instance names). Is there a macro or something on the back end that I can update to display the true IP address of each system phoning home? The true source IP is showing in metrics.log, so I'd like to modify the existing SPL to use the IP from metrics rather than wherever it's getting it now.
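One possible starting point, with the caveat that the URI pattern and field names here are assumptions to verify against your own _internal data: deployment clients phone home over the management port, and those requests are logged in splunkd_access along with the connecting IP (which, behind the LB, is the LB's address). If the phone-home URI still embeds the client's own address, something like this might recover it:

```spl
index=_internal sourcetype=splunkd_access phonehome
| rex field=uri "phonehome/connection_(?<reported_ip>\d+\.\d+\.\d+\.\d+)_"
| stats latest(reported_ip) as reported_ip by clientip
```

Compare the rex against a raw splunkd_access phonehome event first; the connection_ segment layout varies between versions.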
I have an HTTP Event Collector configured with a heavy forwarder in the DMZ forwarding to an internal indexer. The timestamp of all events is being set to the time received; it's not picking up the "time" value from the body despite my props.conf settings. No errors or warnings in _internal around timestamps or anything close to it. Test event sent to the collector:

curl --location --request POST 'https://<redacted>.com/services/collector' \
--header 'Authorization: Splunk <redacted>' \
--header 'Content-Type: application/json' \
--data-raw '{"event": {"time":"2021-02-04 20:20:20.123-05:00","userSettings":{"userId":"ab12345","userName":"ab12345","site":"000"},"version":5070004},"sourcetype": "st-test"}'

It shows up in search results as expected (raw):

{"time":"2021-02-04 20:20:20.123-05:00","userSettings":{"userId":"ab12345","userName":"ab12345","site":"901"},"version":5070004}

props.conf for this sourcetype is configured on both the heavy forwarder and the internal indexer:

[st-test]
TRUNCATE = 100000
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
disabled = false
pulldown_type = 1
TIME_PREFIX = "time":"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 32

Any ideas?
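One thing that may explain this (worth verifying against the HEC documentation): on the /services/collector event endpoint, the timestamp is taken from a top-level "time" key in the JSON envelope, expressed in epoch seconds, not from a "time" key nested inside "event"; a nested "time" is just payload data. A sketch of the envelope with the timestamp where HEC expects it (the epoch value here is illustrative):

```json
{
  "time": 1612488020.123,
  "sourcetype": "st-test",
  "event": {
    "userSettings": {"userId": "ab12345", "userName": "ab12345", "site": "000"},
    "version": 5070004
  }
}
```

Alternatively, the /services/collector/raw endpoint sends the body through normal parsing, where TIME_PREFIX and TIME_FORMAT would apply.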
Hello, I'm relatively new to Splunk. I have multiple fields with different naming schemes that hold different or identical values. Here's an example:

hash=yj843yj387hfhjf723hjf47hnf29nf
hashes=xmv98svmd89djmfv98jvkfj9jm
Hashes=n9nuevur9vv9v8fj0fefjeffjv9ejve8
sha1_hash=84hmrh42mfu2hmxufxfmu28
src_hash=2xf9mf4jmfijjumrfx2r9mjfru2mjrm9j

name=jayson
Src_name=jayson
NAME=jayson
SubjectUserName=jayson

I'm trying to make a query that checks if there is a field that contains the word "hash" or "name" and tables it out. Here's what I have so far:

| eval Hash=hash, Hash=hashes
| foreach Hash* [eval Hash=mvappend(Hash, "")]
| eval Name=name, Name=Src_name
| foreach Name* [eval Name=mvappend(Name, "")]
| table Name Hash

I need to table the results from any field that has the word "hash" or "name" in it. Also, is there a way to simplify this?
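A sketch of one way to simplify it, using foreach with wildcards so any field whose name contains "hash" or "name" (in the casings shown, since wildcards are case-sensitive) is folded into a single column; '<<FIELD>>' is foreach's placeholder for each matched field:

```spl
[base query]
| foreach *hash* *Hash* [ eval Hash=coalesce(Hash, '<<FIELD>>') ]
| foreach *name* *Name* *NAME* [ eval Name=coalesce(Name, '<<FIELD>>') ]
| table Name Hash
```

coalesce keeps the first non-null value, so if several of the source fields are populated only one survives per row; swap coalesce for mvappend if you want to keep all of them as a multivalue field.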