All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a requirement to extract the "response_time:" field value from all logs and display a table with the name cf_rt. I tried creating a new field, but it's not returning data for all matching cases. Please advise. Log format:

response_time:0.091901 gorouter_time:0.000400 app_id:"038e332a-4423-426b-9693-2488eafcd37d"
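A minimal sketch, assuming the value is always a decimal immediately after "response_time:" (the index name is a placeholder for your environment):

```spl
index=your_index "response_time:"
| rex field=_raw "response_time:(?<cf_rt>[0-9.]+)"
| table cf_rt
```

If the rex matches but some events still show no value, check whether those events carry the label with different spacing or casing.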
I get the following message when trying to restore a saved search in VersionControl For Splunk:

Unknown failure, received a non-200 response code of 400 on the url https://localhost:8089/services/splunkversioncontrol_rest_restore, reason Bad Request, text result is <msg type="WARN">An exception was thrown while dispatching the python script handler.
I have JSON files that have multiple events per file. However, when I ingest the data, Splunk parses some of the timestamps correctly and gives other events from the same file the timestamp of when the data was indexed. Has anyone else had this problem and found a solution or explanation? An all-time search of the source (a path name that ends with the JSON filename) is shown in the picture. Thanks in advance.

props.conf:

[sourcetype]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
AUTO_KV_JSON=false
CHARSET=UTF-8
INDEXED_EXTRACTIONS=json
KV_MODE=none
TRUNCATE=20000
category=Structured
description=JavaScript Object Notation format. For more information, visit http://json.org/
disabled=false
pulldown_type=true
TIME_PREFIX="date":+
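Two things often cause this pattern. First, INDEXED_EXTRACTIONS=json is applied where the file is first read (a universal forwarder, if one is in the path), so these settings may need to live there rather than only on the indexer. Second, with structured extraction the usual way to point at the timestamp is TIMESTAMP_FIELDS rather than TIME_PREFIX. A hedged sketch, assuming the events carry a "date" field in ISO-8601 form (adjust TIME_FORMAT to your actual value):

```conf
[sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = date
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
```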
05-13-2020 16:09:23.381 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/snmp_ta/bin/snmp.py" snmp_stanza:snmp://Encryption_Keymgmt_Devices

Each subsequent entry at the same timestamp repeats the prefix ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/snmp_ta/bin/snmp.py"; the message fragments (a Python traceback emitted line by line) are:

;PyAsn1Error: Can't coerce into integer: Null instance has no attribute 'trunc'
'Can\'t coerce %s into integer: %s' % (value, sys.exc_info()[1])
; File "/opt/splunk/etc/apps/snmp_ta/bin/pyasn1-0.1.6-py2.7.egg/pyasn1/type/univ.py", line 76, in prettyIn
value = self.prettyIn(value)
; File "/opt/splunk/etc/apps/snmp_ta/bin/pyasn1-0.1.6-py2.7.egg/pyasn1/type/base.py", line 68, in init
self, value, tagSet, subtypeSpec
; File "/opt/splunk/etc/apps/snmp_ta/bin/pyasn1-0.1.6-py2.7.egg/pyasn1/type/univ.py", line 22, in init
return self.class(value, tagSet, subtypeSpec, namedValues)
; File "/opt/splunk/etc/apps/snmp_ta/bin/pyasn1-0.1.6-py2.7.egg/pyasn1/type/univ.py", line 107, in clone
value = varName.getMibNode().getSyntax().clone(value)
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/entity/rfc3413/oneliner/cmdgen.py", line 241, in unmakeVarBinds
[ self.unmakeVarBinds(varBindTableRow, lookupNames, lookupValues) for varBindTableRow in varBindTable ],
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/entity/rfc3413/oneliner/cmdgen.py", line 334, in cbFun
varBindTable, cbCtx):
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/entity/rfc3413/cmdgen.py", line 471, in _handleResponse
(cbFun, cbCtx),
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/entity/rfc3413/cmdgen.py", line 156, in processResponsePdu
cachedParams['cbCtx']
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/proto/rfc3412.py", line 453, in receiveMessage
self, transportDomain, transportAddress, wholeMsg
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/entity/engine.py", line 64, in __receiveMessageCbFun
self, transportDomain, transportAddress, incomingMessage
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/carrier/base.py", line 52, in _cbFun
self._cbFun(self, transportAddress, incomingMessage)
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/carrier/asynsock/dgram/base.py", line 83, in handle_read
self.handle_read()
; File "/opt/splunk/lib/python2.7/asyncore.py", line 449, in handle_read_event
obj.handle_read_event()
; File "/opt/splunk/lib/python2.7/asyncore.py", line 108, in readwrite
obj.handle_error()
; File "/opt/splunk/lib/python2.7/asyncore.py", line 123, in readwrite
readwrite(obj, flags)
; File "/opt/splunk/lib/python2.7/asyncore.py", line 201, in poll2
poll_fun(timeout, map)
; File "/opt/splunk/lib/python2.7/asyncore.py", line 220, in loop
use_poll=True, map=self.sockMap, count=1)
; File "/opt/splunk/etc/apps/snmp_ta/bin/pysnmp-4.2.5-py2.7.egg/pysnmp/carrier/asynsock/dispatch.py", line 37, in runDispatcher
Exception with nextCmd to :161: poll error: Traceback (most recent call last):
Hi Team, I need your expert advice on how to configure my logstash.conf file to forward only the ERROR or WARN log lines to Splunk. From some online research, it appears a grok filter, or wrapping the output in an if condition, can be used to achieve the required result. I would appreciate it if you could share a working example. Many thanks!
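A sketch of the conditional approach, assuming delivery to Splunk's HTTP Event Collector via the Logstash http output plugin. The URL, token, and the assumption that the level appears literally in each line's `message` field are all placeholders for your environment:

```conf
filter {
  # drop anything that is not an ERROR or WARN line
  if [message] !~ /(ERROR|WARN)/ {
    drop { }
  }
}

output {
  # hypothetical Splunk HEC endpoint and token
  http {
    url => "https://splunk.example.com:8088/services/collector/raw"
    http_method => "post"
    headers => { "Authorization" => "Splunk 00000000-0000-0000-0000-000000000000" }
    format => "message"
    message => "%{message}"
  }
}
```

A grok filter is only needed if the level must first be parsed out of a more complex line format; for a plain substring match, the conditional above is enough.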
I'm trying to configure the Palo Alto app, and the configuration page does not load (404 Not Found). I'm running Splunk Enterprise 8.0.2 with Palo Alto app and add-on 6.2.
I was curious, and was not able to find an answer online or here: can you create custom eval functions? What I mean by this are things like mvcount() or dc(). I have custom commands in a custom app using Python now, but rather than needing to call a whole new command I would like to do some of these in just an eval. For example, I made a macro that converts an int of seconds into a human-readable string to help display time deltas better; e.g. 6234 would become "1 hour 43 minutes and 54 seconds". I would like to do something like:

| eval cleanTime = duration(seconds)

rather than building a full custom command to do the following:

| duration outputfield=cleanTime seconds

I know the functions' code is locked and part of the source code, but can I add to it?
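Eval functions are not user-extensible in SPL, but two things get close. For this specific case, the built-in tostring() conversion already handles it:

```spl
| eval cleanTime = tostring(seconds, "duration")
```

which renders seconds as an HH:MM:SS-style duration. More generally, a search macro can wrap any eval expression to approximate function-call syntax; a sketch (macro name and arg are illustrative):

```conf
# macros.conf
[duration(1)]
args = secs
definition = tostring($secs$, "duration")
```

Since macros expand textually before parsing, `| eval cleanTime = `duration(seconds)`` then behaves like a custom eval function.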
Has anyone had any success writing field extractions for O365-based events collected via the API? The messages that are generated are HUGE and have multiple fields that contain multiple values. I have tried to use eval and mvindex to see if it's possible to extract those values, but it doesn't appear to be working, and I am wondering if it's because of the JSON format. Writing a regex for one of these events would have me ending up with something a page long, lol. Thanks, Andrew
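For JSON events, spath usually beats regex entirely: it walks the structure and turns array elements into multivalue fields, which mvindex can then slice. A sketch; the index, sourcetype, and the Parameters{}.Value path are assumptions standing in for the actual shape of your events:

```spl
index=your_o365_index sourcetype="o365:management:activity"
| spath
| spath path=Parameters{}.Value output=param_values
| eval first_param = mvindex(param_values, 0)
```

The `{}` notation addresses every element of a JSON array at once, so a page-long regex is rarely necessary.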
Hello everyone. Issue: the multiselect is not working. If I remove all the fields from the multiselect box, the panel does not hide; it shows "waiting for data" instead. Desired behavior: if I remove all the values from the multiselect, it should hide the panel. Some token issue, I believe. Can anyone help me?

<form hideEdit="false" theme="dark">
  <label>test</label>
  <search>
    <query> | makeresults | eval ID= len($test$) | fillnull value="ABC" | where testl!="ABC" | table testl </query>
    <done>
      <condition match="$job.resultCount$&gt;0">
        <set token="display_count">true</set>
      </condition>
      <condition>
        <unset token="display_count"></unset>
      </condition>
    </done>
  </search>
  <fieldset autoRun="true" submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="dropdown" token="director" searchWhenChanged="true">
        <label>Managing Director</label>
        <choice value="*">All</choice>
        <fieldForLabel>Managing Director</fieldForLabel>
        <fieldForValue>Managing Director</fieldForValue>
        <search>
          <query>| loadjob savedsearch="test100" | search Service="$area$" Capability IN ($lead$) | table "Director"</query>
          <done>
            <set token="enabledownload">none</set>
            <unset token="display_details"></unset>
          </done>
        </search>
        <default>*</default>
        <initialValue>*</initialValue>
      </input>
      <input type="dropdown" token="lead" searchWhenChanged="true">
        <label>Service Manager</label>
        <choice value="*">All</choice>
        <fieldForLabel>Service Manager</fieldForLabel>
        <fieldForValue>Service Manager</fieldForValue>
        <search>
          <query>| loadjob savedsearch="test100" | search Service="$area$" Director IN ($director$) | table "Service Manager"</query>
          <done>
            <set token="enabledownload">none</set>
            <unset token="display_details"></unset>
          </done>
        </search>
        <default>*</default>
        <initialValue>*</initialValue>
      </input>
      <input type="time" token="date_range" searchWhenChanged="true">
        <label>Date Range</label>
        <default>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </default>
      </input>
      <input type="multiselect" token="test" searchWhenChanged="true">
        <label> (Select Maximum 5)</label>
        <search>
          <done>
            <set token="enabledownload">none</set>
            <set token="display_details"></set>
          </done>
          <query>| loadjob savedsearch="test100"| search Service="$area$" Director IN ($director$) | table "" </query>
        </search>
        <change>
          <eval token="form.test">if(mvcount('form.test')=6, mvindex('form.test',0,4),'form.test')</eval>
        </change>
        <fieldForLabel></fieldForLabel>
        <fieldForValue></fieldForValue>
        <delimiter>,</delimiter>
        <valuePrefix>"</valuePrefix>
        <valueSuffix>"</valueSuffix>
      </input>
    </panel>
  </row>
  <row>
    <panel id="panel1" depends="$display_count$">
      <title></title>
      <single>
        <search>
          <query></query>
          <earliest>$date_range.earliest$</earliest>
          <latest>$date_range.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <drilldown>
          <set token="display_details">true</set>
          <set token="level">404</set>
          <set token="enabledownload">none</set>
        </drilldown>
      </single>
    </panel>
  </row>
  <row>
    <panel depends="$display_details$">
      <html>
        <a role="button" href="/api/search/jobs/$export_sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=test-Issues.csv&amp;outputMode=csv" class="btn btn-primary" style="display:$enabledownload$">Download Details (CSV)</a>
      </html>
      <table>
        <title>Details</title>
        <search>
          <done>
            <set token="export_sid">$job.sid$</set>
            <set token="enabledownload">inline</set>
          </done>
          <query>savesearch=test100</query>
          <earliest>$date_range.earliest$</earliest>
          <latest>$date_range.latest$</latest>
        </search>
        <option name="count">3</option>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
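One approach, as an untested sketch (condition syntax can vary across Splunk versions), is to drive the panel's depends token from the multiselect's own change handler, unsetting it when the selection is empty:

```xml
<change>
  <condition match="mvcount('value') &gt; 0">
    <set token="display_count">true</set>
  </condition>
  <condition>
    <unset token="display_count"></unset>
  </condition>
</change>
```

This would sit inside the multiselect input. Note that a change element generally holds either conditions or direct actions, so an existing eval that caps the selection length may need to move inside the matching condition.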
I'm trying to build a search that displays the count of individual source IP addresses, based on some criteria, for each firewall. Below is the closest I've been able to get; I've tried about 15 variations of | stats, | chart and | timechart combinations. The goal is a line graph of the count for each source IP address, in a trellis separated by firewall name, instead of the total count that the timechart below displays:

| timechart count(ip) by fw_name

I'm looking for a '| timechart count by ip' style output, but with trellises separated by fw_name. After hours of searching I'm not sure it's doable.
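timechart accepts only one split-by field, and trellis splits on that same field, so two levels of splitting aren't directly expressible in one timechart. A common workaround (a sketch; the index, span, and criteria are placeholders) merges the two fields into one series name:

```spl
index=your_fw_index <your criteria>
| bin _time span=1h
| stats count by _time, fw_name, ip
| eval series = fw_name . ": " . ip
| xyseries _time series count
```

This yields one line per firewall/IP pair on a single chart. True per-firewall trellis panels with per-IP lines usually end up as one dashboard panel per firewall instead.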
Hi Splunk users, after I successfully deployed a Splunk standalone instance, I see this error message regarding skipped searches:

Root Cause(s): The percentage of non high priority searches skipped (58%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=29. Total skipped Searches=17

I'm not sure how to debug this. Thanks in advance for the help!
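The scheduler logs each skip with a reason, so a first debugging step is to see which saved searches are skipped and why (often a concurrency limit on a small standalone box). A sketch against the _internal index, which exists on any instance:

```spl
index=_internal sourcetype=scheduler status=skipped earliest=-24h
| stats count by savedsearch_name, reason
| sort - count
```

From there, the usual fixes are spreading cron schedules out, shortening search time ranges, or reducing the number of concurrent scheduled searches.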
Hello everyone, I need help with two questions. Please consider the scenario below:

index=foo source="A" OR source="B" OR source="C" OR source="D" OR source="E" OR source="F" OR source="G" OR source="H" OR source="I" earliest=-14d
| bin _time span=1d
| fields - index - splunk_server - punct - linecount - splunk_server_group - tag - tag::eventtype - unix_category - unix_group - _raw - eventtype - host - source - sourcetype
| table *
| untable _time FieldName FieldValue
| stats count as Event_count, dc(FieldValue) as distinctCount, mean(FieldValue) as mean by _time FieldName
| sort - _time

The output of the above search is a table like this:

_time        FieldName  Event_count  distinctCount  mean
05/13/2020   Field1     520          520
05/13/2020   Field2     77           56
05/13/2020   Field3     1183         1177           450
05/13/2020   Field4     1785         1785           3164.5299719887953

I have similar values for the last 14 days, and the field values in the FieldName column come from the various sources listed in the search. Now, is it possible to add the respective source column for each of those field values? Some fields come from multiple sources, so is it possible to divide the count values based on the source they come from? (For instance, Field1 could be coming from sources E and F, with counts of 260 from E and 260 from F.)

I have another search:

index=foo source=* earliest=-14d
| bin _time span=1d
| stats count as totalCount by _time source
| sort - _time

The output of that search is a table like this:

_time       source  totalCount
2020-05-13  A       283
2020-05-13  B       1785
2020-05-13  C       252
2020-05-13  D       507
2020-05-13  E       336
2020-05-13  F       10527
2020-05-13  G       1183
2020-05-13  H       2586

My second question: I would like to join these two tables on the columns _time and source, so that the count of the source from the second table is added to the first table for each field value, based on the source it comes from. Any help would be appreciated!
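untable only passes three columns through, which is why source disappears. One workaround, as a sketch (Field1/Field2 stand in for your real field names), is to smuggle source through the _time column, split it back out afterwards, and then join on _time and source:

```spl
index=foo (source="A" OR source="B") earliest=-14d
| bin _time span=1d
| eval _time = _time . "|" . source
| fields _time Field1 Field2
| untable _time FieldName FieldValue
| rex field=_time "(?<t>[^|]+)\|(?<source>.+)"
| eval _time = t | fields - t
| stats count as Event_count, dc(FieldValue) as distinctCount, mean(FieldValue) as mean by _time, source, FieldName
| join type=left _time source
    [ search index=foo source=* earliest=-14d
      | bin _time span=1d
      | stats count as totalCount by _time source ]
```

This splits each field's counts per source (your Field1 260/260 example) and attaches totalCount from the second search.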
How can I check the functionality of the TA-meraki app from Splunk? Also, how can I check whether Splunk is ingesting Meraki logs?
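One quick check is simply counting recent Meraki events; the sourcetype pattern below is an assumption, so adjust it to whatever the TA actually assigns in your environment:

```spl
| tstats count where index=* sourcetype=meraki* earliest=-24h by index, sourcetype, host
```

If this returns nothing, the next place to look is the input configuration (e.g. the syslog/UDP input the TA expects) and splunkd.log for input errors.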
I have a Splunk cluster with ten indexers, and I want to start sending frozen data to our ECS solution via S3. The problem I'm running into is that the script, when called, puts the data in whatever folder is specified in the script, so all data being frozen from every indexer and index is written to the same location. It seems I could solve this by creating a separate script for each indexer, but that would have me maintaining a separate script on each indexer, as well as one for each index I need to freeze. That sounds like a support nightmare. What is best practice here, and how can I bake the correct organizational structure for the buckets into the script so they land in the right place on the ECS?
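Splunk passes the bucket's path as an argument to a coldToFrozenScript, and that path already encodes the index name, so a single script can derive a per-indexer, per-index destination at runtime. A hedged sketch (the bucket-name prefix and the actual S3/ECS upload call are placeholders; it assumes the standard .../<index>/colddb/<bucket> layout):

```python
#!/usr/bin/env python
# Hypothetical coldToFrozenScript sketch: derive a per-indexer, per-index
# S3 prefix from the frozen bucket's path, so one script works everywhere.
import os
import socket
import sys

def s3_prefix(bucket_path, hostname):
    """Build an object prefix like <hostname>/<index>/<bucket> from the
    bucket path Splunk passes (.../<index>/colddb/<bucket>)."""
    parts = os.path.normpath(bucket_path).split(os.sep)
    bucket = parts[-1]   # e.g. db_1589400000_1589300000_42
    index = parts[-3]    # the directory two levels up is the index name
    return "%s/%s/%s" % (hostname, index, bucket)

if __name__ == "__main__" and len(sys.argv) > 1:
    frozen_bucket = sys.argv[1]  # Splunk passes the bucket path as argv[1]
    prefix = s3_prefix(frozen_bucket, socket.gethostname())
    # upload with your S3/ECS client of choice, e.g. (hypothetical):
    #   aws s3 cp --recursive <frozen_bucket> s3://frozen-archive/<prefix>/
    print(prefix)
```

Deployed once via the cluster master's configuration bundle, the same script then produces a distinct prefix per indexer and index without any per-host editing.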
The default value is "item.timestamp", which sends Splunk the timestamp of the CloudWatch log, not the eventTime. I have tried replacing it with "parsed.eventTime", "payload.eventTime", etc.; all result in a failure to send logs. What is the correct object to get eventTime as the log time?
Hello good people, I have installed the Deep Learning Toolkit in Splunk and pulled the MLTK container image in my standalone setup. Everything seems to be configured well, but when I open JupyterLab from the Containers tab, I am prompted for a password. The thing is, I didn't set up any password. Has anyone encountered the same issue?
I have events being sent to Splunk with the following fields: MsgID, Status (Failure/Success). I need to get the list of the top 5 MsgIDs with the most failures, and display each of the 5 MsgIDs in a pie chart with success and failure percentages. I am able to get the top 5 failures, but I can't figure out how to get both success and failure percentages for those top 5. Please help.
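A hedged sketch (the index name is a placeholder; field names are taken from your description) that keeps both outcomes per MsgID while still ranking by failures:

```spl
index=your_index
| stats count(eval(Status="Failure")) as failures, count(eval(Status="Success")) as successes, count as total by MsgID
| sort - failures
| head 5
| eval failure_pct = round(failures / total * 100, 1)
| eval success_pct = round(successes / total * 100, 1)
| table MsgID failure_pct success_pct
```

A single pie chart only splits on one field, so rendering one pie per MsgID typically means a trellis layout over this result or one panel per MsgID.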
I have a JSON string as an event in Splunk:

{"Item1":{"Max":100,"Remaining":80},"Item2":{"Max":409,"Remaining":409},"Item3":{"Max":200,"Remaining":100},"Item4":{"Max":5,"Remaining":5},"Item5":{"Max":2,"Remaining":2}}

Splunk can extract fields like "Item1.Max", but when I try to calculate "Item1.Remaining"/"Item1.Max", it doesn't recognize them as numbers. The convert and tonumber functions don't work on them either. Also, how can I convert the string to a table like the one below?

Items  Max  Remaining  Percentage
Item1  100  80         80
Item2  409  409        100
Item3  200  100        50
Item4  5    5          100
Item5  2    2          100
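In eval, field names containing dots must be wrapped in single quotes; double-quoted names are treated as string literals, which is the usual reason the division "isn't numeric". So the direct fix is:

```spl
| spath
| eval Item1_pct = round('Item1.Remaining' / 'Item1.Max' * 100, 0)
```

And one way to pivot the items into rows, as a sketch that assumes a single JSON event (transpose operates on the one result):

```spl
| spath
| fields *.Max *.Remaining
| transpose
| rename column as field, "row 1" as value
| rex field=field "(?<Items>[^.]+)\.(?<metric>.+)"
| xyseries Items metric value
| eval Percentage = round(Remaining / Max * 100, 0)
| table Items Max Remaining Percentage
```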
Hi, I am trying to create an alert that triggers when there is a sudden increase in 404 traffic on the site. Instead of a fixed number, I think an alert based on the percentage of traffic would be more effective at avoiding false positives. For example, if I have X requests at 14:00 and Y requests at 14:30, then we should get an alert at 15:00 if the percentage change is very high, say > 20%.

index=test_env host=server-1* status=404

Any guidance is appreciated.
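A sketch comparing each half hour to the one before, using the 30-minute buckets and 20% threshold from the description (the zero-guard avoids dividing by an empty previous bucket):

```spl
index=test_env host=server-1* status=404 earliest=-2h
| bin _time span=30m
| stats count by _time
| delta count as diff
| eval prev = count - diff
| eval pct_change = if(prev > 0, round(diff / prev * 100, 1), null())
| where pct_change > 20
```

Scheduled every 30 minutes with "trigger when number of results > 0", this fires only on a relative spike rather than any fixed volume.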
I tried to uninstall and reinstall the app, and I tried changing the build version. Any other ideas? Thanks, regards.