All Topics


Hi folks, what query can I use to count my field "viewer.Id" to see how many viewers we had between 01/22/2022 and 02/02/2022? I would also like to see the increment/decrement in the count from my results, and the change in %, when comparing different dates. Thanks, Evans
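A sketch of one way to do this in SPL, assuming the events live in an index called your_index (an assumption — substitute your own): count distinct viewer.Id values per day, then use delta for the day-over-day change and an eval for the percentage.

```
index=your_index earliest="01/22/2022:00:00:00" latest="02/03/2022:00:00:00"
| timechart span=1d dc(viewer.Id) AS viewers
| delta viewers AS change
| eval pct_change=round(change / (viewers - change) * 100, 2)
```

The delta command subtracts each day's count from the previous day's, and (viewers - change) recovers the previous day's value for the percentage calculation.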
Hello, I need a role that can only create users and roles. I selected the capabilities admin_all_objects, edit_user and edit_roles. I don't have any problems creating users, but when I try to create roles from this role, the Create button is inactive, preventing me from adding the new role. Any help is really appreciated. Regards, TGMAna
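For reference, a minimal authorize.conf sketch for such a role (the role name is hypothetical; the capability names are the ones from the question). This is a sketch of the configuration being described, not a guaranteed fix for the greyed-out Create button:

```
# authorize.conf -- hypothetical role intended to manage users and roles
[role_user_manager]
importRoles = user
admin_all_objects = enabled
edit_user = enabled
edit_roles = enabled
```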
I have an issue with my Splunk forwarder. Inside inputs.conf, the interval is set to run at 5 9 * * *, so 09:05 daily if I did it correctly. I restart the Splunk service as the splunk user. The job will run ONE time at 09:05. The only way I can get it to run as scheduled is if I set the interval in seconds, to 86400. Does anyone else have this problem?
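For comparison, a cron-style interval on a scripted input would look roughly like the sketch below (the script path is hypothetical). If the job only fires once, it is worth confirming the deployed stanza actually looks like this after restart:

```
# inputs.conf -- hypothetical scripted input with a cron-style schedule
[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_script.sh]
interval = 5 9 * * *
disabled = 0
```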
So I'm trying to set up REST API calls with Add-on Builder, and it requires two params: 'fromDate' and 'toDate'. I ran into two problems: 1) 'toDate' in my case is the time/date now (at the moment of the API call). Is it possible to set this param to something like Date.now() in JavaScript? 2) 'fromDate' should be a checkpoint taken from the last record in the last response. The problem is that in the response this timestamp is in UNIX format, while the request expects UTC %d/%m/%y%H%M. How can I convert UNIX into UTC? Also, can I add an additional second to this extracted timestamp so my data won't overlap?
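On problem 2: Add-on Builder custom inputs are Python, so both conversions can be done with the standard library. A minimal sketch, assuming the checkpoint is a plain epoch integer; I use a readable date format here as an assumption — substitute the exact format string your API expects:

```python
from datetime import datetime, timedelta, timezone

def checkpoint_to_utc(epoch_ts, fmt="%d/%m/%Y %H:%M:%S"):
    """Convert a UNIX epoch checkpoint to a formatted UTC string,
    advanced by one second so the next request does not overlap."""
    dt = datetime.fromtimestamp(epoch_ts, tz=timezone.utc) + timedelta(seconds=1)
    return dt.strftime(fmt)

# 'toDate' -- the equivalent of JavaScript's Date.now() at request time:
to_date = datetime.now(timezone.utc).strftime("%d/%m/%Y %H:%M:%S")

# 'fromDate' -- epoch checkpoint from the last record, shifted by one second:
print(checkpoint_to_utc(1643803241))  # → 02/02/2022 12:00:42
```

Doing the +1 second in datetime arithmetic (rather than on the formatted string) keeps it correct across minute, day, and month boundaries.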
Hi all, my solution uses a heavy forwarder that sends data to an indexer. In the transforms.conf file I have a regex which lets me keep only the lines I need.

Regex example:
REGEX = ^.*(?:SIMONE|MARCO).*

Example of the file.log being monitored:
xxxxx|xxxxx|xxxxx|SIMONE|xxxxx|xxxxx|xxxxx
xxxxx|xxxxx|xxxxx|VALERIO|xxxxx|xxxxx|xxxxx
xxxxx|xxxxx|xxxxx|SILVIA|xxxxx|xxxxx|xxxxx
xxxxx|xxxxx|xxxxx|MARCO|xxxxx|xxxxx|xxxxx

I am seeing these errors:
ERROR Regex - Failed in pcre_exec: Error PCRE_ERROR_MATCHLIMIT for regex:
WARN regexExtractionProcessor - Regular expression for stanza xxxxx exceeded configured PCRE match limit. One or more fields might not have their values extracted, which can lead to incorrect search results. Fix the regular expression to improve search performance or increase the MATCH_LIMIT in props.conf to include the missing field extractions.

I wanted to know: is there a different way to filter the data to send, without regex? Best regards, Simone
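One common way to tame PCRE_ERROR_MATCHLIMIT without abandoning regex is to anchor the expression so it cannot backtrack across the whole line. Assuming the name is always the fourth pipe-delimited field, as in the sample above (an assumption — adjust the repeat count to your format), a sketch:

```
# transforms.conf -- anchored match; assumes the name is the 4th pipe-delimited field
[keep_simone_marco]
REGEX = ^(?:[^|]*\|){3}(?:SIMONE|MARCO)\|
DEST_KEY = queue
FORMAT = indexQueue
```

The leading and trailing .* in the original are what force PCRE to try many match positions; the anchored form checks each field exactly once.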
I would like to group URL fields and get a total count. When I do this:

index=example source=example_example dest="*.amazonaws.com" OR dest="*.amazoncognito.com" OR dest="slack.com" OR dest="*.docker.io"
| dedup dest
| table dest
| stats count by dest

the output is this:

dest                                                        count
352532535.abc.def.eu-xxxxx-1.amazonaws.com                  1
abc.auth.xx-aaaa-1.amazoncognito.com                        1
aaa1-stage-login-abcdef.auth.xx-abcd-1.amazoncognito.com    1
346345452.abc.def.us-abcd-2.amazonaws.com                   1
autoscaling.xx-east-4.amazonaws.com                         1
slack.com                                                   1
registry-1.docker.io                                        1
auth.docker.io                                              1

I want to group them by similar patterns, like this:

groupedURL          count
.amazonaws.com      3
.amazoncognito.com  2
slack.com           1
.docker.io          2

I've tried other possible queries based on some postings here, but no luck. It was mostly after the '.com'.
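One approach, sketched in SPL: extract the trailing domain suffix with rex and count distinct hosts per group. The field names are reused from the question; the capture pattern assumes a simple name.tld tail (it would need adjusting for suffixes like .co.uk):

```
index=example source=example_example dest="*.amazonaws.com" OR dest="*.amazoncognito.com" OR dest="slack.com" OR dest="*.docker.io"
| rex field=dest "(?<groupedURL>[^.]+\.[^.]+)$"
| stats dc(dest) AS count BY groupedURL
```

dc(dest) counts distinct hosts per suffix, which also makes the dedup/table steps unnecessary.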
I was investigating bundle sizes coming from one of my SHCs and came across several apps in the bundle that had the following in the lookup directory. Qualys is just one example; there are several other apps where index.default and index.alive files are present. Can someone tell me what these are and what they're doing in a knowledge bundle?

qualys_kb.csv_1534282613.index.default
qualys_kb.csv_1643803241.755269.cs.index.alive
How do I set the visibility of panels in Dashboard Studio? I was going to create a multiselect input, but how can I tie the visibility of a panel to the selection being made?
I work in a large, clustered Splunk ES environment. Should the KV stores only be running on the search heads? It looks like after upgrading to 8.2.4 we have to use the WiredTiger KV store engine. Do you have any input on which tier the KV stores need to be running on, or on using WiredTiger in general? I appreciate your response in advance.
It looks like this particular example (Table with Data Bars) does not work for me. Is there anything in particular I should check? Is it a bug? I'm using Splunk 8.2.1 and the latest Dashboard Examples app (8.2.2).
I recently started trying to set up some field extractions for a few of our events. In this case, the logs are pipe delimited and contain only a few segments. What I've found is that most of these attempts result in a rex error regarding limits in limits.conf.

For example, this record:
2022-02-03 11:45:21,732 |xxxxxxxxxxxxxxx.xxxxxx.com~220130042312|<== conn[SSL/TLS]=274107 op=26810 MsgID=26810 SearchResult {resultCode=0, matchedDN=null, errorMessage=null} ### nEntries=1 ### etime=3 ###

When I attempt to use a pipe-delimited field extraction (for testing), the result is this error. When I toss the regex (from the error) into regex101 (https://regex101.com/r/IswlNh/1), it tells me it requires 2473 steps, which is well above the default 1000 for depth_limit. How is it that an event with 4 segments delimited by pipes is so bad?

I realize there are two limits (depth_limit/match_limit) in play here, and I can increase them, but nowhere can I find recommended values to use as a sanity check. I also realize I can optimize the regex, but as I am setting this up via the UI using the delimited option, I don't have access to the regex at creation time. Not to mention, many of my users are using this option as they are not regex gurus.

So my big challenge/question is: where do I go from here? My users are going to use this delimited option, which evidently generates some seriously inefficient regex under the covers. Do I increase my limit(s), and if so, what is a sane/safe value? Is there something I'm missing? Thanks!
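For reference, both knobs live in the [rex] stanza of limits.conf. The values below are illustrative, not recommendations:

```
# limits.conf -- illustrative values only
[rex]
match_limit = 100000
depth_limit = 10000
```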
Hello Splunkers, I have a question about building Splunk apps with Dashboard Studio, specifically the portability of the app. The traditional way of building Splunk apps via Simple XML allows you to save images in the static folder inside your app, so whenever you download the app from Splunkbase you have everything you need. Dashboard Studio, by contrast, saves your images and icons in the KV store. With this in mind, how would you package a Splunk app that uses Dashboard Studio without losing any pictures or icons? Thank you, Marco
Hi Experts, I'm trying to set up SAML SSO for Splunk Cloud against an external IDP. I've loaded the IDP's SAML metadata into the Splunk SAML configuration. The metadata includes a self-signed signing certificate. All the SAML authentication redirects are working, but Splunk complains that it could not verify the Assertion signature using the signing certificate. What am I missing? Does Splunk Cloud not really support self-signed IDP signing certificates?
I am trying to match a directory path including the string "\Users", but Splunk is throwing an error:

| rex field=TargetFilename "C:\\Users\\\w+\\AppData\\(?<File_Dir>.*)\."

Error in 'rex' command: Encountered the following error while compiling the regex 'C:\Users\\w+\AppData\(?<File_Dir>.*)\.': Regex: PCRE does not support \L, \l, \N{name}, \U, or \u.

How can I literally match the path?
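A commonly used workaround is to write four backslashes for each literal backslash, so one backslash survives SPL's string parsing and PCRE still receives an escaped backslash. A sketch (untested against your data; the character class [^\\\\]+ stands in for \w+ to avoid further escaping ambiguity, and the capture name "user" is hypothetical):

```
| rex field=TargetFilename "C:\\\\Users\\\\(?<user>[^\\\\]+)\\\\AppData\\\\(?<File_Dir>.*)\."
```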
We have a standalone Splunk Enterprise environment running Splunk 8.2.x. We have loaded the Splunk Add-on for SolarWinds (latest version; just downloaded it about two weeks ago). We are trying to get all three SolarWinds inputs (Alerts, Query, Inventory) to work. The Query and Inventory inputs work fine, but the Alerts input is not working (we get no data returned even though SolarWinds is producing alerts on its console). My questions are these: 1. Has anyone else experienced this problem and found a solution? 2. Does anyone know which logs, in either Splunk or SolarWinds, we can look at to help debug this issue? Thanks for your help.
Hi, we have the Custom Radar Visualization app installed, latest version 1.1.1. The Splunk Upgrade Readiness App is saying that it is not compatible with jQuery 3.5. However, the app page on Splunkbase states it's compatible with Splunk Enterprise 8.1 and 8.2. Does anybody else use the Readiness App, and why would it say the app is not compatible? Thanks in advance, Andy
I would like to view an HTML webpage which is located under the local directory of one of my Splunk apps. I have created a dashboard to embed this HTML webpage. However, I am unable to get the contents of this webpage, as shown in the attachment. Please assist.
Hello, the requirement is to find gaps of data unavailability (start time and end time) in a given time range. The condition is: if a specific weekday has an event in a certain period (say, the Sunday of the first week) but the same weekday in another week (say, the Sunday of the second week) does not, then my search still has to account for that second Sunday when calculating the duration of data unavailability.
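A generic starting point for gap detection in SPL (before layering on the weekday condition) is to sort by time and compare each event with the previous one. The index name and the one-hour threshold here are assumptions:

```
index=your_index
| sort 0 _time
| streamstats current=f last(_time) AS prev_time
| eval gap_start=prev_time, gap_end=_time, gap_duration=_time - prev_time
| where gap_duration > 3600
| eval gap_start=strftime(gap_start, "%F %T"), gap_end=strftime(gap_end, "%F %T")
| table gap_start gap_end gap_duration
```

streamstats with current=f carries the previous event's _time forward, so each row can report the start, end, and duration of the gap that precedes it.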
When running a search for "EventCode=35" OR "EventCode=36" OR "EventCode=37" OR "EventCode=38" source="WinEventLog:System" and then exporting that to a CSV file, columns N through BN have the following message as their column header, where <workstation> is the workstation name:

If_this_is_the_first_occurrence_of_this_event_for_the_specified_computer_and_account__this_may_be_a_transient_issue_that_doesn_t_require_any_action_at_this_time___If_this_is_a_Read_Only_Domain_Controller_and__<workstation>___is_a_legitimate_machine_account_for_the_computer__<workstation>__then__<workstation>__should_be_marked_cacheable_for_this_location_if_appropriate_or_otherwise_ensure_connectivity_to_a_domain_controller__capable_of_servicing_the_request__for_example_a_writable_domain_controller____Otherwise__the_following_steps_may_be_taken_to_resolve_this_problem

I am told by the AD team that at least column N should be a date. Starting at column CC and going through EC, I am seeing this as the header:

Otherwise__assuming_that__<workstation>___is_not_a_legitimate_account__the_following_action_should_be_taken_on__<workstation>_

These headers are nowhere to be found when the search brings back data; they only show up when exporting to CSV. Anyone have any idea what is going on?
______
Edited to add: I also want to point out this only happens when searching in Smart or Verbose mode. It does not happen in Fast mode.
https://docs.splunk.com/Documentation/Splunk/8.1.2/RESTTUT/RESTsearches I can see that through the search API provided by Splunk Enterprise (on-premise), we can query the log data collected by Splunk via endpoints. Is there any way we can inject or query data from one running Splunk Enterprise instance to another through the search API? Is there any configuration available for this use case?
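For the querying half, a search against a remote instance can be run over its management port with a plain REST call; the host and credentials below are placeholders. Pushing data into another instance is normally done with forwarding or the HTTP Event Collector rather than through the search API:

```
# Placeholders throughout -- run an export search against a remote instance
curl -k -u admin:changeme \
  https://remote-splunk.example.com:8089/services/search/jobs/export \
  --data-urlencode search="search index=_internal | head 5" \
  -d output_mode=json
```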