All Topics

I recently started trying to set up some field extractions for a few of our events. In this case, the logs are pipe-delimited and contain only a few segments. What I've found is that most of these attempts result in a rex error about limits in limits.conf. For example, this record:

2022-02-03 11:45:21,732 |xxxxxxxxxxxxxxx.xxxxxx.com~220130042312|<== conn[SSL/TLS]=274107 op=26810 MsgID=26810 SearchResult {resultCode=0, matchedDN=null, errorMessage=null} ### nEntries=1 ### etime=3 ###

When I attempt to use a pipe-delimited field extraction (for testing), the result is that error. When I toss the regex from the error into regex101 (https://regex101.com/r/IswlNh/1), it tells me it requires 2473 steps, which is well above the default of 1000 for depth_limit. How is it that an event with 4 segments delimited by pipes is so bad?

I realize there are two limits (depth_limit and match_limit) in play here and I can increase them, but nowhere can I find recommended values to use as a sanity check. I also realize I can optimize the regex, but since I am setting this up via the UI using the delimited option, I don't have access to the regex at creation time. Not to mention, many of my users rely on this option because they are not regex gurus.

So my big challenge/question is: where do I go from here? My users are going to use the delimited option, which evidently generates some seriously inefficient regex under the covers. Do I increase my limit(s), and if so, what is a sane/safe value? Is there something I'm missing? Thanks!
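For comparison, a hand-written extraction over an event like this stays cheap because each pipe-delimited segment is matched with a negated character class ([^|]+) rather than the greedy wildcards the delimited UI option tends to generate. The field names here are invented for the sketch:

| rex field=_raw "^(?<log_time>[^|]+)\|(?<host_info>[^|]+)\|(?<message_body>.*)$"

If you do decide to raise the limits instead, they live in the [rex] stanza of limits.conf as depth_limit and match_limit.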
Hello Splunkers, I have a question about building Splunk apps with Dashboard Studio, specifically about the portability of the app. The traditional way of building Splunk apps via Simple XML lets you save images in the static folder inside your Splunk app, so whenever you download the app from Splunkbase you have everything you need. Dashboard Studio, by contrast, saves your images and icons in the KV store. With this in mind, how would you package a Splunk app that uses Dashboard Studio without losing any pictures or icons? Thank you, Marco
Hi Experts, I'm trying to set up SAML SSO for Splunk Cloud against an external IDP. I've loaded the IDP's SAML metadata into the Splunk SAML configuration. The metadata includes a self-signed signing certificate. All the SAML authentication redirects are working, but Splunk complains that it could not verify the Assertion signature using the signing certificate. What am I missing? Does Splunk Cloud not really support self-signed IDP signing certificates?
I am trying to match a directory path including the string "\Users" but Splunk is throwing an error:

| rex field=TargetFilename "C:\\Users\\\w+\\AppData\\(?<File_Dir>.*)\."

Error in 'rex' command: Encountered the following error while compiling the regex 'C:\Users\\w+\AppData\(?<File_Dir>.*)\.': Regex: PCRE does not support \L, \l, \N{name}, \U, or \u.

How can I match the path literally?
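A sketch of one common fix (not tested against this data): the double-quoted string is unescaped by the search-language parser before it reaches PCRE, so a literal backslash in the path generally needs to be written as four backslashes. Something along these lines, keeping the original capture group:

| rex field=TargetFilename "C:\\\\Users\\\\\w+\\\\AppData\\\\(?<File_Dir>.*)\."

With that, PCRE receives C:\\Users\\\w+\\AppData\\(?<File_Dir>.*)\. and the unsupported \U sequence no longer appears.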
We have a standalone Splunk Enterprise environment running Splunk 8.2.x. We have loaded the Splunk Add-on for SolarWinds (latest version, downloaded about two weeks ago). We are trying to get all three SolarWinds inputs (Alerts, Query, Inventory) to work in the add-on. The Query and Inventory inputs work fine, but the Alerts input is not (we are getting no data returned even though SolarWinds is producing alerts on its console). My questions are these:
1. Has anyone else experienced this problem and found a solution?
2. Does anyone know which logs in either Splunk or SolarWinds we can look at to help debug this issue?
Thanks for your help.
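On question 2, modular-input add-ons typically write their own log files under $SPLUNK_HOME/var/log/splunk, which are indexed into _internal. As a rough, unverified sketch (the exact source name for this TA is an assumption), something like this may surface errors from the Alerts input:

index=_internal source=*solarwinds* (ERROR OR WARN)
| sort - _time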
Hi, we have the Custom Radar Visualization app installed, latest version 1.1.1. The Splunk Upgrade Readiness App is saying that it is not compatible with jQuery 3.5. However, the app page on Splunkbase states it is compatible with Splunk Enterprise 8.1 and 8.2. Does anybody else use the Readiness App, and why would it say the app is not compatible? Thanks in advance, Andy
I would like to view an HTML webpage that is located under one of the Splunk apps' local directories. I have created a dashboard to embed this HTML webpage. However, I am unable to get the contents of this webpage to display, as shown in the attachment. Please assist.
Hello, the requirement is to find gaps of data unavailability (start time and end time) in a given time range. The condition is that if a specific weekday has an event in a certain period (say, the Sunday of the first week) and the same weekday in another week of that period (say, the Sunday of the second week) does not have an event, then my search still has to treat the second Sunday as having an event when calculating the duration of data unavailability.
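For the plain gap-detection part (before the weekday carry-forward rule), a common pattern is to compare each event's timestamp with the previous one using streamstats and flag differences above a threshold. The index, sourcetype and the one-hour threshold below are placeholders:

index=your_index sourcetype=your_sourcetype
| sort 0 _time
| streamstats current=f last(_time) as prev_time
| eval gap_seconds=_time - prev_time
| where gap_seconds > 3600
| eval gap_start=strftime(prev_time, "%F %T"), gap_end=strftime(_time, "%F %T")
| table gap_start gap_end gap_seconds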
When running a search for "EventCode=35" OR "EventCode=36" OR "EventCode=37" OR "EventCode=38" source="WinEventLog:System" and then exporting the results to a CSV file, columns N through BN have the following message as their column header, where <workstation> is the workstation name:

If_this_is_the_first_occurrence_of_this_event_for_the_specified_computer_and_account__this_may_be_a_transient_issue_that_doesn_t_require_any_action_at_this_time___If_this_is_a_Read_Only_Domain_Controller_and__<workstation>___is_a_legitimate_machine_account_for_the_computer__<workstation>__then__<workstation>__should_be_marked_cacheable_for_this_location_if_appropriate_or_otherwise_ensure_connectivity_to_a_domain_controller__capable_of_servicing_the_request__for_example_a_writable_domain_controller____Otherwise__the_following_steps_may_be_taken_to_resolve_this_problem

I am told by the AD team that at least column N should be a date. Starting at column CC and going through EC, I am seeing this as the header:

Otherwise__assuming_that__<workstation>___is_not_a_legitimate_account__the_following_action_should_be_taken_on__<workstation>_

These headers are nowhere to be found when the search brings back data; they only show up when exporting to CSV. Anyone have any idea what is going on?

Edited to add: I also want to point out this only happens when searching in Smart or Verbose mode. It does not happen in Fast mode.
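If the goal is a clean export, one workaround is to select the columns explicitly before exporting, which sidesteps the fields that Smart/Verbose mode auto-extracts from the long message text. A minimal sketch; the chosen columns are only an example:

"EventCode=35" OR "EventCode=36" OR "EventCode=37" OR "EventCode=38" source="WinEventLog:System"
| table _time host EventCode Message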
https://docs.splunk.com/Documentation/Splunk/8.1.2/RESTTUT/RESTsearches I can see that, through the search API provided by Splunk Enterprise (on premise), we can query the log data collected by Splunk via endpoints. Is there any way we can inject or query data from one running Splunk Enterprise instance into another through the search API? Is there any configuration available for this use case?
I have multiple event forwardings enabled on my Phantom App for Splunk that use saved searches to trigger notable events to Phantom. We recently upgraded the app from version 4.0.35 to 4.1.73. With this upgrade, all field mappings that were saved in the event forwardings (locally) were erased, and now there are 0 fields mapped in the event forwardings. Since almost all the mapped fields in each of the event forwardings were the same, I re-mapped them manually on one of the event forwardings and, while saving it, checked the "Save Mappings" option, which saves those fields in the global mappings. But now the mappings don't work for all the event forwardings as they should (given the global mapping); they only work for the single event forwarding where the mapping is saved locally.

Troubleshooting done:
1. Tried to restore the phantom.conf file - did not work; no mapping was detected after the restore.
2. Tried to clone the single event forwarding with locally mapped fields - did not work; as soon as I change the Saved Search setting (as shown in the screenshot) and save, the mapped fields turn to 0 in the cloned event forwarding, as shown in the second screenshot.

I really don't want to manually map the fields locally, since it would result in close to 2000 fields in total (300 fields on each of the 7 event forwardings). Any help on this issue is really appreciated. This is the sample event forwarding config page; the field mappings get reset after saving the event forwarding. I am on Splunk 8.1.5.
Hi, I want to create a summary index for the OS metrics below. How do I achieve this?
1. Avg CPU per week
2. Avg memory per week
3. Avg /var/log/ % used, per week
4. # processes running, per week
Thanks
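For illustration, one common pattern is a weekly scheduled search that aggregates the raw data and writes the result to a summary index with collect. A sketch for the CPU metric only, assuming Splunk Add-on for Unix and Linux data (sourcetype=cpu with a pctIdle field) and a summary index named os_summary; those names are assumptions, and the other three metrics would follow the same shape:

index=os sourcetype=cpu earliest=-7d@d latest=@d
| stats avg(eval(100 - pctIdle)) as avg_cpu_pct by host
| eval metric="avg_cpu_per_week"
| collect index=os_summary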
Hi Experts, I wondered about the best way of comparing the below data. I have a query which returns as so:

index=myindex sourcetype=mysourcetype host="myhost" | table process, tier, country

This returns 100 or so processes with their tier and country, as expected. There are only 4 countries: uk, usa, denmark and spain. It returns something like this:

process     tier     country
process1    roman    uk
process2    roman    usa
process3    roman    denmark
process4    anglo    uk
process5    anglo    usa
process6    anglo    denmark
process7    anglo    spain

The roman tier should be present in each country. If spain is missing, as above, how do I show only the missing entry for spain as the outlier? This is basically for a rec purpose so we can see what's missing. Thanks in advance!
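A sketch of one way to surface the incomplete tiers, reusing the index and field names from the question and assuming the set of four countries is fixed:

index=myindex sourcetype=mysourcetype host="myhost"
| stats dc(country) as country_count values(country) as countries_present by tier
| where country_count < 4

Any tier with fewer than four countries is returned, and the missing country is whichever of uk/usa/denmark/spain is absent from countries_present; a lookup of expected tier/country pairs would let you list the missing value explicitly.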
I am coming across an interesting problem where notables are being generated for each event in Splunk with unique notable IDs, despite trigger conditions being set and notables being set to "Trigger Once". For example, if we are looking for 5 or more failed user login events over the span of 10 minutes, we will receive 5 notable alerts in our queue, one for each event, despite having the count condition set to >=5 and other trigger conditions set, and having the trigger set to "once" as opposed to "for each result". It seems that even when the trigger is set to "once", it is still behaving as if it were set to "for each result". Is there a way to set trigger conditions using SPL itself so that only one notable event is generated for a query that yields multiple results, so we will only receive one notable?
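One way to enforce this in SPL itself is to have the correlation search return a single aggregated row per entity instead of one row per raw event, so only one notable is created regardless of the trigger setting. A rough sketch for the failed-login case; the index and field names are placeholder assumptions:

index=wineventlog EventCode=4625
| stats count as failed_logins min(_time) as first_seen max(_time) as last_seen by user
| where failed_logins >= 5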
Hello, I have the following issue. I have a search A that yields the state of a device. I would like to supplement the state with the command that led to that state. Therefore I am looking to get the last command from search B with the same device ID that occurs before my event in search A. In order to do this, I have used a left join:

index="IndexA" sourcetype="SourceA" ..... | eval time=_time | table time ID state
| join type=left left=A right=B usetime=true earlier=true where A.ID=B.ID
    [search index="IndexA" sourcetype="SourceB" ... | eval time=_time | table time ID command | sort _time-]
| eval timediff='A.time'-'B.time'

Now I have the following issues:
1. Is there a direct way to access the internal field _time? (Using 'A._time' doesn't work, which is why I am saving it in a field of my own named time.)
2. Somehow, if I don't use table at the end of the search command, I cannot access the value of time in the subsearch B using 'B.time'. What is the reason for this?
3. And most important: I am getting results from the subsearch B which are newer than my event in search A. Is this because I used sort inside the subsearch?

Thanks and best regards
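For what it's worth, a join-free sketch of the same idea: pull both sourcetypes into one search, sort by time, and carry the most recent earlier command forward per device ID with streamstats. The index, sourcetype and field names are taken from the question; everything else is an untested assumption:

index="IndexA" (sourcetype="SourceA" OR sourcetype="SourceB")
| eval cmd_time=if(sourcetype=="SourceB", _time, null())
| sort 0 _time
| streamstats current=f last(command) as last_command last(cmd_time) as last_cmd_time by ID
| where sourcetype=="SourceA"
| eval timediff=_time - last_cmd_time
| table _time ID state last_command last_cmd_time timediff

Note that sort 0 over a large time range can be expensive, so this is only a sketch of the pattern.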
Hello everyone! I'm looking for assistance with fine-tuning Enterprise Security. I've been working hard on configuring ES to start generating notable events. We're getting lots of them: almost 73k Access notables, 66 Endpoint notables, 2.4k Network notables, 0 Identity notables, 11 Audit notables, and 3.8k Threat notables. What does typical fine-tuning entail? Finding out what is a false positive and modifying the correlation searches to ignore certain criteria? What else could I be missing?
Is there any way we can inject data from one running Splunk Enterprise (on premise) instance into another through the search API? I can find the search APIs for Splunk (https://docs.splunk.com/Documentation/Splunk/8.1.2/RESTTUT/RESTsearches), but I am looking for a way to inject data through these endpoints without using a forwarder. Is this possible?
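For reference, pushing events to another instance over HTTP without a forwarder is normally done with the HTTP Event Collector rather than the search endpoints; the receiving instance needs HEC enabled and a token. A rough sketch, with the host, port, token and index as placeholders:

curl -k "https://target-splunk:8088/services/collector/event" \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"event": "hello from the source instance", "sourcetype": "manual", "index": "main"}'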
Hi everyone, I'm using the Splunk app https://splunkbase.splunk.com/app/1546/. I want to split a single JSON array event into multiple events, one per "addrRef". Below is my JSON array example, and at the end the response handler I wrote, which is not working but is not producing any error either when I look at my _internal logs. Could anyone tell me why my response handler is not working, or what I'm doing wrong? Best regards

JSON array (collapsed nodes shown as { ... } / [ ... ]):

{
  result: {
    ipamRecords: [
      {
        addrRef: IPAMRecords/248211
        address: 10.1.1.20
        claimed: false
        customProperties: { ... }
        device:
        dhcpLeases: [ ... ]
        dhcpReservations: [ ... ]
        discoveryType: ARP
        dnsHosts: [ ... ]
        extraneousPTR: false
        interface:
        lastDiscoveryDate: Feb 3, 2022 08:11:04
        lastKnownClientIdentifier: AB:BA:CA:FF:EA:66
        lastSeenDate: Feb 3, 2022 07:55:17
        ptrStatus: OK
        state: Assigned
        usage: 25140
      }
      {
        addrRef: IPAMRecords/357310
        address: 10.2.2.21
        claimed: false
        customProperties: { ... }
        device:
        dhcpLeases: [ ... ]
        dhcpReservations: [ ... ]
        discoveryType: Ping
        dnsHosts: [ ... ]
        extraneousPTR: false
        interface:
        lastDiscoveryDate: Feb 2, 2022 13:40:17
        lastKnownClientIdentifier: BA:BB:AA:B5:28:AC
        lastSeenDate: Nov 3, 2017 17:07:34
        ptrStatus: OK
        state: Assigned
        usage: 24596
      }
      { ... }
      { ... }
      { ... }
      { ... }
      { ... }
    ]
    totalResults: 7
  }
}

My response handler (not working, but not giving any error on "index=_internal host=Myhost"), in /opt/splunk/etc/apps/rest_ta/bin/responsehandlers.py:

class MenAndMiceHandler:
    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            for addrRef in output:
                print_xml_stream(json.dumps(addrRef))
        else:
            print_xml_stream(raw_response_output)
Hello, I have set up a SaaS trial account and followed the steps end to end for the below:

Windows 2019 Server - agent runs successfully and connects to the SaaS instance, but no metrics showing.
Windows 10 Desktop - agent runs successfully and connects to the SaaS instance, but no metrics showing.
Ubuntu - agent runs successfully and connects to the SaaS instance, but no metrics showing.

machineagent-bundle-64bit-windows-22.1.0.3252
machineagent-bundle-64bit-linux-22.1.0.3252