All Posts

I made those changes, and when I go to the webpage it prompts me for a PIN. After entering my CAC PIN I get the following error: This XML file does not appear to have any style information associated with it. The document tree is shown below. <response> <messages> <msg type="ERROR">Unauthorized</msg> </messages> </response>
Good evening everyone, we have a problem in a Splunk cluster composed of 3 indexers, 1 CM, 1 SH, 1 Deployer, 3 HF, and 3 UF. The UFs receive logs from different Fortinet sources via syslog, and rsyslog writes them to a specific path. Splunk_TA_fortinet_fortigate is installed on the forwarders. These logs must be saved to a specific index in Splunk, and a copy must be sent to two distinct destinations (third-party devices) in two different formats (customer requirements). Since the formats differ (one of the two contains TIMESTAMP and HOSTNAME, the other does not), rsyslog saves them to two distinct paths, applying two different templates. So far so good.

The issues we have encountered are:
- Some events are indexed twice in Splunk
- Events sent to the customer do not always have a format that complies with the required ones

For example, in one of the two cases the required format is the following:

<PRI> date=2024-09-12 time=14:15:34 devname="device_name" ...

But looking at the sent packets via tcpdump, some are correct, others are in the format

<PRI> <IP_address> date=2024-09-12 time=14:15:34 devname="device_name" ...

and still others are in the format

<PRI> <timestamp> <IP_address> date=2024-09-12 time=14:15:34 devname="device_name" ...
The outputs.conf file is as follows:

[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://indexer_1:9997]
[tcpout-server://indexer_2:9997]
[tcpout-server://indexer_3:9997]

[tcpout:default-autolb-group]
server = indexer_1:9997,indexer_2:9997,indexer_3:9997
disabled = false

[syslog]

[syslog:syslogGroup1]
disabled = false
server = destination_IP_1:514
type = udp
syslogSourceType = fortigate

[syslog:syslogGroup2]
disabled = false
server = destination_IP_2:514
type = udp
syslogSourceType = fortigate
priority = NO_PRI

This is the props.conf:

[fgt_log]
TRANSFORMS-routing = syslogRouting

[fortigate_traffic]
TRANSFORMS-routing = syslogRouting

[fortigate_event]
TRANSFORMS-routing = syslogRouting

and this is the transforms.conf:

[syslogRouting]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup1,syslogGroup2

Any ideas? Thank you, Andrea
Sometimes you can right-drag and then opt to copy without formatting.  If not, copy the field name and use Ctrl-Shift-V to paste it without the HTML.
I hope it did not amount to a partial search. I share your concern regarding subsearches; maybe appendpipe could help. This is a challenge for me: I should not update the lookup when the search is seeing partial results. It happens very rarely, so maybe doing it differently could help. Sorry, I hoped there was something in SPL that would tell me when the search results are limited in some way. require is a very good hint!
Good day, I often run up against the issue of wanting to drag the text of a field name from the browser into a separate text editor. Whenever I drag it, it works, but it brings all the HTML metadata with it. Sometimes these field names are very long and truncated on the screen, so without copying and pasting it's very tough. Has anyone found a good workaround for this? Right now a field name, when dragged from the web browser into a text editor, comes through like this:

https://fakebus.splunkcloud.com/en-US/app/search/search?q=search%20%60blocks%60&sid=1726153610.129675&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&earliest=-30m%40m&latest=now#https://fakebus.splunkcloud.com/en-US/app/search/search?q=search%20%60blocks%60&sid=1726153610.129675&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&earliest=-30m%40m&latest=now#

Ironically, dragging a text field from Splunk into this web dialog box works fine.
It's not clear what is meant by "partial search" or how Splunk is to know a search returned partial results rather than just fewer results. The subsearch idea likely won't work because subsearches execute before the main search and so would be unable to detect errors in the main search. There is the require command, which will abort a query if there are zero results. That may not meet your requirements, however.
Hello, I'm not sure how to troubleshoot this at all. I've created a new Python-based app through the Add-on Builder that uses a collection interval of 60 seconds; the app input is set to 60 seconds as well. When I test the script, which makes chained API calls and creates events based on the last API call, it returns within 20 seconds. The app should create about 50 events for each interval, so when searching I would expect to see about 50 events per minute, but I'm seeing 6 or 7. I ran the following query, and it shows that the event time and index time are within milliseconds of each other:

source=netscaler
| eval indexed_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table _raw event_time indexed_time

When looking at the app log, I see it's only making the final API calls every 20 seconds instead of making all 50 of the final API calls within milliseconds. Does anyone have any idea why this would occur and how I could resolve this lag? Thanks for your help, Tom
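If the chained API calls are issued one at a time, 50 final calls at a few hundred milliseconds each would explain events trickling in rather than landing together. A minimal sketch of the idea (the function and field names here are hypothetical placeholders, not the Add-on Builder's actual code) is to fan the final calls out over a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_event(item_id):
    # Placeholder for the final per-item API call; a real
    # implementation would issue an HTTP request here.
    return {"id": item_id, "status": "ok"}

def collect_all(item_ids, max_workers=10):
    # Run the final API calls concurrently instead of sequentially,
    # so 50 calls finish in roughly the time of the slowest few.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_event, item_ids))

events = collect_all(range(50))
```

Whether this applies depends on how the Add-on Builder wrapper schedules your script, so treat it as a direction to investigate rather than a confirmed fix.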
Run it like...

| makeresults
| eval base_search="| makeresults | appendcols [| inputlookup ticket_templates where _key=5d433a4e10a7872f3a197e81 | stats max(*) as *]"
| map search="| makeresults | map search="$base_search$

This would definitely work!!
Currently the above fix is only for Microsoft ADFS, but it is possible using Okta and F5 using the SAML configuration with the prompt being on the IdP side. What is your IdP?
Since Splunk 6.x we have been using a proxy server (Apache) with Splunk to pass the user's CAC credentials to Splunk. Is it true that with 9.2.2 a proxy is no longer needed? I'm also trying to implement CAC authentication following "Configure Splunk Enterprise to use a common access card for authentication" (Splunk Documentation) and "Configuring Splunk for Common Access Card (CAC) authentication" (Splunk Lantern), but I am now getting the following error message: "This site can't be reached"
So just to be clear, this would not be a candidate for KV Store?
Assuming you have co-located the syslog-ng install on the same server as your Splunk HF, there are a few options available to you. You can continue to build the standalone app to gain some experience and learning opportunities. Or...

Most people running syslog-ng set the destination of syslog events to a file, separated by some sort of host detail. Since the file is local to the disk, you can use the HF web interface to set up a Data Input file monitor, and it will guide you through the common event-breaking, line-breaking, time-extraction, and field-extraction options. Essentially everything a standalone app would do, but easier to manage down the road by continuing to use the web interface.

In a previous life I set the destination of syslog-ng to a HEC receiver, which in your situation could be the local HF or the IDX cluster. That takes a bit of work, though, and you already have a lot of development written to file, so maybe it is not the right idea for you.
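For reference, the HEC route mentioned above sends JSON payloads to the collector endpoint. A minimal sketch of the event shape (the index name and host values here are made up; the token and URL are deployment-specific and omitted):

```python
import json

def build_hec_event(raw_line, source_host, index="fortinet"):
    # Shape of a single HTTP Event Collector event payload.
    # "event", "host", "sourcetype", and "index" are standard
    # HEC metadata keys; the values below are illustrative.
    return {
        "event": raw_line,
        "host": source_host,
        "sourcetype": "syslog",
        "index": index,
    }

payload = json.dumps(build_hec_event("<134>test message", "fw01"))
```

The actual POST (with the `Authorization: Splunk <token>` header to `/services/collector/event`) is left out here, since endpoint details vary by deployment.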
From Splunk's perspective, no action is needed to stop using the software.  Your company, however, may have its own requirements, such as archiving the data before decommissioning the server.
This is to be used as a coldToFrozenScript; essentially, I'm trying to avoid having multiple copies of my bucket. As per the guidance: "In an Indexer cluster, each individual peer node rolls its buckets to frozen, in the same way that a non-clustered indexer does; that is, based on its own set of configurations. Because all peers in a cluster should be configured identically, all copies of a bucket should roll to frozen at approximately the same time."
Thanks a lot for your answer. You are right; however, I feel this doesn't scale well when there are many values, since it generates "number of Web.Site unique values" * 2 columns, which pretty much breaks the browser. Additionally, now that I have columns with the pattern WebSiteExample1.event_count_1week_before, etc., how could I write an expression like the one I had, so that I only keep the values where the difference between this week and the previous one is more than X percent?

I don't see an easy way to compare chunks of X minutes of data against the same time window one week ago. Timewrap seemed perfect for that use case, but it looks like it's designed only for graphical representation.
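The filtering step being asked about (keep only sites whose count changed by more than X percent week over week) is just a percent-difference threshold. A sketch of the arithmetic outside SPL, with made-up site names and counts:

```python
def pct_change(current, previous):
    # Percent change of this week's count versus the same window
    # one week before. A previous count of zero is treated as an
    # infinite increase when the current count is nonzero.
    if previous == 0:
        return float("inf") if current else 0.0
    return (current - previous) / previous * 100.0

# (this week, 1 week before) per site -- illustrative values only
counts = {
    "WebSiteExample1": (150, 100),
    "WebSiteExample2": (102, 100),
}

# Keep sites whose event count changed by more than X percent.
threshold = 20.0
flagged = {site: pct_change(cur, prev)
           for site, (cur, prev) in counts.items()
           if abs(pct_change(cur, prev)) > threshold}
```

In SPL the same condition would be a `where` clause over the paired columns; the sketch above just makes the comparison explicit.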
Something is not quite right here:
- Your regex string is missing some question marks (although they do appear in your error message!)
- Your error message says you have hit a limit with max_match, but your rex command doesn't appear to be using max_match, and your sample log is a single line, so even if you were using max_match there would only be one set of results.

Please can you clarify / expand your question?
Let Splunk manage the buckets for you: https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Automatearchiving

Currently your script doesn't seem to have any filter on cold and would appear to copy all files. Is this a one-time execution? Run on a cron, it would seem to copy over and over again, creating a storage issue.

Benefits of Splunk management:
1) Frozen reduces storage, as the raw data is stored in compressed format and the IDX files are stripped away
2) Easy methods to reintroduce frozen data back to thawed
3) Expands automatically if you have IDX clustering
4) You're not duplicating storage of cold-qualified time spans in a manual frozen folder
5) Folder management and creation is automatic
Hey All, can anybody help me with optimization of this rex?

| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"

Example log:

"#HLS# IID: EB_FILE_S, STEP: SEND_TOF, PKEY: Ids:100063604006, 1000653604006, 6000125104001, 6000135104001, 6000145104001, 6000155104001, STATE: IN_PROGRESS, MSG0: Sending request to K, EXCID: dcd, PROPS: EVENT_TYPE: SEND_TO_S, asd: asd #HLE#"

ERROR: "Streamed search execute failed because: Error in 'rex' command: regex="#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#" has exceeded configured match_limit, consider raising the value in limits.conf."
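A match_limit error like this usually comes from backtracking: lazy `.*?` groups next to `\s*` give the engine many equivalent ways to split the input, especially on events that ultimately do not match. One common mitigation is anchoring each lazy group to the next known label and replacing `.*?` with a negated character class where the sample data allows it. A sketch of that idea in Python (the same pattern logic should transfer to rex, but treat it as an assumption to test against your real events, not a verified drop-in; in particular it assumes MSG0 never contains a comma, as in the sample):

```python
import re

# Each group either stops at a negated character ([^,]+) or is
# explicitly anchored to the next literal label, which limits
# how far the engine can backtrack on a failed match.
PATTERN = re.compile(
    r"#HLS#\s*"
    r"IID:\s*(?P<IID>[^,]+),\s*"
    r"STEP:\s*(?P<STEP>[^,]+),\s*"
    r"PKEY:\s*(?P<PKEY>.*?),\s*STATE:"   # PKEY stops at the STATE label
    r"\s*(?P<STATE>[^,]+),\s*"
    r"MSG0:\s*(?P<MSG0>[^,]*),\s*"       # assumes no commas inside MSG0
    r"EXCID:\s*(?P<EXCID>[A-Za-z_]+),\s*"
    r"PROPS:\s*(?P<PROPS>[^#]+?)\s*#HLE#"
)

sample = ("#HLS# IID: EB_FILE_S, STEP: SEND_TOF, "
          "PKEY: Ids:100063604006, 1000653604006, 6000125104001, "
          "6000135104001, 6000145104001, 6000155104001, "
          "STATE: IN_PROGRESS, MSG0: Sending request to K, "
          "EXCID: dcd, PROPS: EVENT_TYPE: SEND_TO_S, asd: asd #HLE#")

m = PATTERN.search(sample)
```

If events without a closing #HLE# are common, pre-filtering them (e.g. with a cheap `search "#HLE#"` before the rex) also cuts the pathological backtracking cases.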
Thank you for your response! Yes, what I am referring to is no longer using Splunk at all, not just one indexer. The question was more in terms of any scripts or data that was not removed from Splunk specifically before decommissioning the server.
Splunkbase automatically archives apps that have not been updated in 24 months. That does not mean the app no longer works or cannot be used. However, as noted on the cited page, this app has been replaced by another: the functionality of this add-on has been incorporated into the supported Splunk Add-on for Microsoft Office 365, https://splunkbase.splunk.com/app/4055