All Topics


Hi everyone,

My client has indexes with events which are sometimes really large. The problem is that field extraction in such cases doesn't work properly. For example, opening an event shows the whole raw event, but the fields below it are trimmed: if a field is a few thousand characters long, the fields view below the event shows only roughly the first thousand characters. Moreover, attempts to manipulate such fields produce unexpected results, e.g.

| eval len_x = len(field_x)

returns 71, although the field is several thousand characters long. Searches targeting such events sometimes fail as well, e.g. specifying an event by its ID with event_uid=unique_id (a field-value combination present in the event) returns nothing, although a less specific search over the same time frame returns that event.

We also tried to tackle the problem at the source, i.e. to shorten the overly long field before indexing:

| eval field_x = if(len(field_x) > 1000, substr(field_x, 1, 1000) . "(oversized field trimmed)", field_x)

but this only trimmed the fields, without adding the text in the brackets.

So, since I haven't managed to find it in the documentation, I would like to ask the following: is there a limit on field length, and does it depend on the overall event size? How should we deal with such long fields?

Thanks and kind regards,
Krunoslav Ivesic
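A minimal first check (the index name is a placeholder): compare the length of the raw event with the length of the extracted field at search time, to see whether it is the search-time extraction, rather than the indexed event itself, that is being truncated.

index=your_index
| eval raw_len=len(_raw), field_len=len(field_x)
| table _time raw_len field_len

If field_len is consistently capped while raw_len is not, the ceiling is on field extraction rather than on the event; the relevant limits live in limits.conf (for example maxchars in the [kv] stanza), which would also explain why the eval only ever sees an already-truncated value.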
Hi guys, I have a host_blackout.csv, and I want to update the blackout for three hosts (mep1, mep2, mep3) among the 30 hosts I have: the new end_time should be updated to the end of next week ("08/28/2022 11:00"). My output looks like this:

end_time          host  notes         start_time
08/18/2022 09:00  mep1  INC000006     08/14/2022 23:00
08/11/2022 09:00  mep2  INC000002     08/11/2022 20:15
08/12/2022 10:00  mep3  INC000003     08/10/2022 12:00
08/10/2022 09:00  mep4  INC000004     08/06/2022 23:00
08/05/2022 09:00  mep5  INC0000012    10/27/2018 00:00
08/05/2022 09:00  mep6  INC00000123   08/03/2022 23:00
08/05/2022 09:00  mep7  INC000002537  10/27/2018 00:00
08/05/2022 09:00  mep8  INC0000011    11/20/2018 00:00
08/05/2022 09:00  mep9

Can you help please?
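A minimal sketch of one way to update those three rows in place, assuming host_blackout.csv is a plain CSV lookup file:

| inputlookup host_blackout.csv
| eval end_time=if(host="mep1" OR host="mep2" OR host="mep3", "08/28/2022 11:00", end_time)
| outputlookup host_blackout.csv

outputlookup rewrites the whole file, so the untouched hosts are written back unchanged.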
Hello,

After upgrading from an earlier version to 3.0.9 (I upgraded because other people reported the JavaScript issue I was trying to fix), the app isn't creating incidents anymore. I found this in alert_manager_scheduler.log, which is the only Alert Manager log that contains anything. I have checked the KV store; it is ready on all SHC members, but none of the alert metadata is getting created.

2022-08-17 13:42:19,996 WARNING pid="5761" logger="alert_manager_scheduler" message="KV Store is not yet available, sleeping for 1s." (alert_manager_scheduler.py:62)

The alerts run and try to send, but splunkd.log shows this:

08-17-2022 13:46:05.489 -0400 INFO sendmodalert [25767 AlertNotifierWorker-0] - Invoking modular alert action=alert_manager for search="Widows logging" sid="scheduler__<user>__search__RMD5467d08babc5954da_at_1660758360_111_64D51C26-A29A-41E8-917F-9211B53D56B5" in app="search" owner="<user>" type="saved"
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - Traceback (most recent call last):
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - File "/opt/splunk/etc/apps/alert_manager/bin/alert_manager.py", line 574, in <module>
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - config = getIncidentSettings(payload, settings, search_name, sessionKey)
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - File "/opt/splunk/etc/apps/alert_manager/bin/alert_manager.py", line 484, in getIncidentSettings
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - if ('impact' in result or result['impact'] != ''):
08-17-2022 13:46:06.095 -0400 ERROR sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager STDERR - KeyError: 'impact'
08-17-2022 13:46:06.142 -0400 INFO sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager - Alert action script completed in duration=651 ms with exit code=1
08-17-2022 13:46:06.142 -0400 WARN sendmodalert [25767 AlertNotifierWorker-0] - action=alert_manager - Alert action script returned error code=1
08-17-2022 13:46:06.142 -0400 ERROR SearchScheduler [25767 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script returned error code 1., search='sendalert alert_manager results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__<user>__search__RMD5467d08babc5954da_at_1660758360_111_64D51C26-A29A-41E8-917F-9211B53D56B5/results.csv.gz" results_link="https://<host>:8000/app/search/@go?sid=scheduler__<user>__search__RMD5467d08babc5954da_at_1660758360_111_64D51C26-A29A-41E8-917F-9211B53D56B5"'

Does anyone have any idea what might be going on? Thanks for your assistance.
Dear all,

I have a pretty bare Splunk Universal Forwarder that was installed at 8.2.5 and had no errors on restart, but when I upgraded it to 9.0.0.1 I started to get the following errors. NOTE: these are all in the system/default files (so not my settings):

Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Invalid key in stanza [provider:splunk] in /opt/splunkforwarder/etc/system/default/federated.conf, line 20: mode (value: standard).
Invalid key in stanza [general] in /opt/splunkforwarder/etc/system/default/federated.conf, line 23: needs_consent (value: true).
Currently I use a query similar to the one below to plot data on a 24-hour graph.

index=mock_index source=mock_source.log param1 param2 param3
| rex field=_raw "Latency: (?<latency>[0-9]+)"
| eval time = mvjoin(mvindex(split(_raw, " "), 0, 1), " ")
| eval time = strptime(time, "%Y-%m-%d %H:%M:%S,%3N")
| table time, latency

An example event:

2022-08-16 14:04:34,123 INFO [stuff] Latency: 55 [stuff]

Ideally I would like to get latency averages over 5-minute periods and display the data on a graph whose x-axis is labeled at 30-minute intervals. Given this goal, is strptime() the best way to manage the timestamps in my events?
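A minimal sketch of the 5-minute averaging, assuming the events' timestamps are already parsed into _time at index time (in which case re-parsing them with strptime is unnecessary):

index=mock_index source=mock_source.log param1 param2 param3
| rex field=_raw "Latency: (?<latency>[0-9]+)"
| timechart span=5m avg(latency) AS avg_latency

The 30-minute x-axis labelling can usually be handled in the chart's axis formatting options rather than in the search itself.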
Hello! Can I please have help with making a table that lets people type text that creates a new row in the table? I have made one already, but once someone types text, it does not clear the previously entered input text. Once someone has entered their text and presses submit, I want the text boxes to go back to being blank (circled in blue). Also, is there a way for someone to delete a row from the table after it is added, in case they put in the wrong information or it is no longer relevant? Thank you!
I'm working with a KV store, since the Netskope IP information needs updating. I figured out how to add to it using this SPL:

| makeresults
| eval Data="aaa.bbb.ccc.ddd/mask"
| eval Desc="Netskope"
| eval Type="netskope_ip"
| outputlookup append=true override_if_empty=false my_kvstore

I found multiple examples of how to delete:

curl -k -u admin:yourpassword -X DELETE https://localhost:8089/servicesNS/nobody/kvstoretest/storage/collections/data/kvstorecoll/5410be5441...

The thing is, I can't find the data path. I can't substitute the correct path into the command above if I can't figure out what the correct path is. Any suggestions?

TIA, Joe
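A minimal sketch for finding the document IDs that belong at the end of that REST path, assuming my_kvstore is the lookup definition backed by the collection — each record's hidden _key field is what the DELETE endpoint expects:

| inputlookup my_kvstore
| eval key=_key
| table key Data Desc Type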
Hello folks, I have JSON data in the format below. I am looking for the best way to build a table listing the keys, which can eventually be used for an input dropdown in a dashboard. The output of the table needs to look like the list below. Your help is much appreciated.

bzk.f1
bzk.f4
bzk.f8

{
    "bzk": {
        "f1": "ABC",
        "f4": "ABC",
        "f8": "ABC"
    }
}
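A minimal sketch of one way to turn the key names into a single-column table, assuming the JSON sits in _raw and one representative event is enough (the index and sourcetype are placeholders):

index=your_index sourcetype=your_json_sourcetype
| head 1
| spath
| fields bzk.*
| transpose
| rename column AS key
| table key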
Currently using a manual verification of non-US logins:

sourcetype="o365:management:activity"
| iplocation ActorIpAddress
| search Country!="United States" action=success
| stats count by UserId, Operation, ActorIpAddress, Country, action
| sort -count

I want to create a search that shows failed logins followed by a success for a user, regardless of source IP. Thanks.
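A minimal sketch of one way to pair each success with the event that preceded it for the same user, assuming action carries the literal values "failure" and "success" for this sourcetype:

sourcetype="o365:management:activity" (action=failure OR action=success)
| sort 0 UserId _time
| streamstats current=f window=1 last(action) AS prev_action last(ActorIpAddress) AS prev_ip by UserId
| where action="success" AND prev_action="failure"
| table _time UserId Operation ActorIpAddress prev_ip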
I'm having issues properly extracting all the fields I'm after from some json.  The logs are from a script that dumps all the AWS Security Groups into a json file that is ingested into Splunk by a UF.  Below is a sanitized example of the output of one AWS Security Group.   I've tried various iterations of spath with mvzip, mvindex, mvexpand.  I've also tried to no avail using foreach.  I'm stumped as to how to get Splunk to pull out each instance of CidrIp and Description inside the FromPort.   The end goal is to be able to search for a port or an address and get back all the corresponding info. Example Search: index=something FromPort=22 | table FromPort, CidrIp, Description, ToPort Example Results FromPort, CidrIp, Description, ToPort 22, 10.10.10.1, Server01 SSH rule, 22 22, 10.10.10.2, Server 002 inbound , 22 etc....   Right now my extracting the fields only results in the first field for each rule. When working correctly it would look like this and would contain all the rules in the log.     | makeresults | eval _raw="{ \"Description\": \"Rules for server\", \"GroupId\": \"sg-02d3a65ece83ba3a98\", \"GroupName\": \"Fake group name\", \"IpPermissions\": [ { \"FromPort\": 22, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.59/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.24/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Some Host - SSH\" }, { \"CidrIp\": \"10.64.77.11/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.136/32\", \"Description\": \"SSH\" }, { \"CidrIp\": \"10.64.77.171/32\", \"Description\": \"SSH\" }, { \"CidrIp\": \"10.64.77.37/32\", \"Description\": \"Monitoring App - SSH\" }, { \"CidrIp\": \"10.64.77.174/32\", \"Description\": \"Server003\" }, { \"CidrIp\": \"10.64.77.154/32\", \"Description\": \"Server004\" }, { \"CidrIp\": \"10.226.109.245/32\", \"Description\": \"Server to Server\" }, { \"CidrIp\": \"10.226.109.157/32\", \"Description\": \"Another server to other stuff\" }, { \"CidrIp\": \"10.226.109.172/32\", \"Description\": \"Another server to other stuff\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 22, \"UserIdGroupPairs\": [] }, { \"FromPort\": 49763, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.59/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.64.77.24/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.64.77.37/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.64.77.11/32\", \"Description\": \"Monitoring - Other Ports\" }, { \"CidrIp\": \"10.226.109.157/32\", \"Description\": \"Over here to over there\" }, { \"CidrIp\": \"10.226.109.172/32\", \"Description\": \"Over here to over there\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 35226, \"UserIdGroupPairs\": [] }, { \"FromPort\": 139, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - Netbios\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 139, \"UserIdGroupPairs\": [] }, { \"FromPort\": 135, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - DCOM\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 135, \"UserIdGroupPairs\": [] }, { \"FromPort\": 445, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - MS-DS\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 
445, \"UserIdGroupPairs\": [] }, { \"FromPort\": 443, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - HTTPS\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 443, \"UserIdGroupPairs\": [] }, { \"FromPort\": -1, \"IpProtocol\": \"icmp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.59/32\", \"Description\": \"Monitoring Server - ICMP\" }, { \"CidrIp\": \"10.64.77.24/32\", \"Description\": \"Ping\" }, { \"CidrIp\": \"10.64.77.11/32\", \"Description\": \"Monitoring Server - ICMP\" }, { \"CidrIp\": \"10.64.77.37/32\", \"Description\": \"Monitoring Server - ICMP\" }, { \"CidrIp\": \"10.226.109.157/32\", \"Description\": \"Over here to over there\" }, { \"CidrIp\": \"10.226.109.172/32\", \"Description\": \"Over here to over there\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": -1, \"UserIdGroupPairs\": [] }, { \"FromPort\": 1024, \"IpProtocol\": \"tcp\", \"IpRanges\": [ { \"CidrIp\": \"10.64.77.29/32\", \"Description\": \"Server 007 - High Ports\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"ToPort\": 65535, \"UserIdGroupPairs\": [] } ], \"IpPermissionsEgress\": [ { \"IpProtocol\": \"-1\", \"IpRanges\": [ { \"CidrIp\": \"0.0.0.0/0\" } ], \"Ipv6Ranges\": [], \"PrefixListIds\": [], \"UserIdGroupPairs\": [] } ], \"OwnerId\": \"223310898711\", \"VpcId\": \"vpc-192ac32be1b1a987c\" }" | spath IpPermissions{}.FromPort output=a_FromPort | spath IpPermissions{}.IpProtocol output=a_IpProtocol | spath IpPermissions{}.IpRanges{}.CidrIp output=a_CidrIp | spath IpPermissions{}.IpRanges{}.Description output=a_Description | spath IpPermissions{}.ToPort output=a_ToPort | eval a_zipped=mvzip(mvzip(mvzip(mvzip(a_FromPort, a_IpProtocol), a_CidrIp), a_Description), a_ToPort) | mvexpand a_zipped | eval b_FromPort=mvindex(split(a_zipped,","),0), b_IpProtocol=mvindex(split(a_zipped,","),1), b_CidrIp=mvindex(split(a_zipped,","),2), b_Description=mvindex(split(a_zipped,","),3), b_ToPort=mvindex(split(a_zipped,","),4) | table b_FromPort, b_IpProtocol, b_CidrIp, b_Description, b_ToPort, a_zipped    
I'm running into some strange behavior: the first time I open my dashboard, it shows no visualization for the data, as if the dashboard's query has not been executed or produces no data. Simply reloading the page still does not show the visualization for the queried data. However, if I edit the input field whose token value the query uses, and the value I enter is different from the existing value, the dashboard shows the expected visual. But if I just reload the page again, the visual disappears! The dashboard uses a Splunk extension written in JavaScript, but the same extension works with another dashboard without this reloading problem. How can I approach solving this mystery?
I am developing a reporting command, but I've found a problem with the search command I made. To explain the problem, I have attached some code snippets from the Splunk Python SDK.

0. The final query I want to run is:

index=splunk_example | table test, test_results | customcommand

(where customcommand is the reporting command). However, the command is not executed properly: strangely, it runs against the data stored on the search head, but not against the data stored on the indexers.

1. What I found out:
- The map phase runs on the indexers, and the reduce phase runs on the search head.
- Looking at the SDK source code below, I expected the data extracted on the indexers to be passed as records when the map function is executed.

Q) Execution of a reporting command is done in two stages, map and reduce, but only the reduce function actually executes. Is there any way to get the map function to run?

1-1. Here is the code provided by the SDK:

def map(self, records):
    """ Override this method to compute partial results.

    :param records:
    :type records:

    You must override this method, if :code:`requires_preop=True`.
    """
    return NotImplemented

def prepare(self):
    phase = self.phase
    if phase == 'map':
        # noinspection PyUnresolvedReferences
        self._configuration = self.map.ConfigurationSettings(self)
        return
    if phase == 'reduce':
        streaming_preop = chain((self.name, 'phase="map"', str(self._options)), self.fieldnames)
        self._configuration.streaming_preop = ' '.join(streaming_preop)
        return
    raise RuntimeError('Unrecognized reporting command phase: {}'.format(json_encode_string(six.text_type(phase))))

def reduce(self, records):
    """ Override this method to produce a reporting data structure.

    You must override this method.
    """
    raise NotImplementedError('reduce(self, records)')

2. What I tried:
- The code above suggests setting the requires_preop option to true so that the map function runs:

@Configuration()
class TestReportingCommand(ReportingCommand):

    @Configuration(requires_preop = True)
    def map(self, records):
        ...

    def reduce(self, records):

- commands.conf (added):

requires_preop = true

I tried both methods, but according to my logging only the reduce function is executed; the map function never runs. If you know how to get map to work, please share. I used a translator, so there may be some awkwardness in the text.
Hi, is there any documentation where we can check the meaning of these attributes in the adrum payload? Regards, Pranjal
Hello, hoping someone can guide me here. I'm trying to find a way to have a single user's dashboard session timeout differ from that of the group they are in. We're using Splunk Enterprise and trying to find a way to change just that single user's timeout. Is this possible? I can't find any resources online for this. Thanks.
Hi all, I have a lookup, instance_list, which I'm trying to use to filter my flow logs to show only the logs whose sourcetype is one of the instances I'm interested in, so:

index="sample_data" [
    | inputlookup instance_list
    | search instancename="*dc*"
    | lookup eni_list instanceid OUTPUT eni as sourcetype
    | format ]
| ......

There are 3-6 instances that match instancename="*dc*"; running the inputlookup section on its own produces the correct list. Unfortunately I get no results, and applying the instance names to each log and then filtering results in a really slow search. Any pointers are really welcome!
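One common reason such a subsearch returns nothing is that format also emits the instancename and instanceid fields, which don't exist in the raw flow logs. A minimal sketch with the field list narrowed before format, so that only sourcetype terms are generated:

index="sample_data" [
    | inputlookup instance_list
    | search instancename="*dc*"
    | lookup eni_list instanceid OUTPUT eni AS sourcetype
    | fields sourcetype
    | format ]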
Hello, one of our clients would like to send us data from their Kafka cluster to our AWS environment, which consists of Heavy Forwarders. We then forward the data to an on-prem team that has the indexers and search heads.

From reading the documentation I am somewhat lost about what is needed on our side. Until now we integrated new log sources via the pull method using SNS + SQS, since the data volume was not as large as it is now.

From my understanding, in order to use a Kafka push method with Kafka Connect:
1. Require the client to install Kafka Connect on their cluster.
2. Create a Kafka topic.

But what steps are required on the Heavy Forwarder side? How do we subscribe to that topic? Do we only need a HEC collector with a token in order to forward the data to the on-prem team? Thank you.
Hello,

We are trying to modify the existing query in the "Remote Desktop Network Bruteforce" correlation search (from the Splunk ES use cases) to exclude events with the same session_id. The original query is:

| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Network_Traffic where All_Traffic.app=rdp by All_Traffic.src All_Traffic.dest All_Traffic.dest_port
| eventstats stdev(count) AS stdev avg(count) AS avg p50(count) AS p50
| where count>(avg + stdev*2)
| rename All_Traffic.src AS src All_Traffic.dest AS dest
| table firstTime lastTime src dest count avg p50 stdev
| `remote_desktop_network_bruteforce_filter`

We have tried using the dedup command and the distinct_count function of the stats command, without success.

Thanks in advance,
Best regards,
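A minimal sketch of one variation, assuming session_id is actually populated for these events in the Network_Traffic data model — counting distinct sessions instead of raw events effectively collapses events that share a session_id:

| tstats `security_content_summariesonly` dc(All_Traffic.session_id) AS count min(_time) AS firstTime max(_time) AS lastTime from datamodel=Network_Traffic where All_Traffic.app=rdp by All_Traffic.src All_Traffic.dest All_Traffic.dest_port

The rest of the pipeline (eventstats, where, rename, table and the filter macro) can stay as in the original.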
Hi, is it possible to get a list of all scheduled scripts on a Linux UF? Similar to splunk list exec, but showing the next time each script should run?
Hi all,

I am trying to view a lookup file whose sharing is set to "this app only" from an app other than the one where it is defined. Is there any way to achieve this without changing the permission in the GUI? This is the SPL I'm running, but it skips the lookup files that aren't shared globally. Maybe temporarily set the sharing to global and set it back, or something similar?

| rest splunk_server=local /servicesNS/-/-/data/lookup-table-files
| fields title eai:acl.owner eai:acl.app
| where !match(title,"\.mlmodel")
| rename eai:acl.* as *
| map [
    | inputlookup $title$
    | foreach * [ | eval b_<<FIELD>>=len(<<FIELD>>) + 1 ]
    | addtotals b_* fieldname=b
    | stats sum(eval(b/1024/1024)) as mb
    | eval name="$title$", owner="$owner$", app="$app$" ] maxsearches=1000
Hello, I'm a beginner Splunker from Korea.

index=my sourcetype=my2 sernder_ip=my3
| table _time
| stats count by _time
| sort - _time

Here, even when the count is zero, I want it to show up in the graph. Please help!
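A minimal sketch that keeps the empty time buckets, assuming an hourly bucket is acceptable (adjust span as needed) — timechart emits a zero count for spans with no events, which stats count by _time cannot do:

index=my sourcetype=my2 sernder_ip=my3
| timechart span=1h count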