The existing rex command searches for 3 digits following "rest", which does not match the sample text. Try this command: | rex "\\\"(?<method>\w+) \/rest\/(?<ActionTaken>.*)"
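For example, a quick end-to-end sketch (the index and sourcetype here are placeholders, not from the original post):

index=web sourcetype=access_log "/rest/"
| rex "\\\"(?<method>\w+) \/rest\/(?<ActionTaken>.*)"
| stats count by method ActionTaken

The rex pulls the HTTP verb into method and everything after /rest/ into ActionTaken; the stats then counts each combination.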
I have a standard printed statement that shows something like this:

[29/Aug/2024:23:59:48 +0000] "GET /rest/LMNOP
[29/Aug/2024:23:59:48 +0000] "POST /rest/LMNOP
[29/Aug/2024:23:59:48 +0000] "PUT /rest/LMNOP
[29/Aug/2024:23:59:48 +0000] "DELETE /rest/LMNOP

I don't have a defined field called "ActionTaken", in the sense of whether the user was doing a PUT, POST, GET, etc. Is there a simple regex I could add to a query that would define a variable called "ActionTaken"? I tried this:

rex "\//rest/s*(?<ActionTaken>\d{3})"

But it comes back with nothing.
I am trying to use a lookup of "known good" filenames that are within FTP transfer logs, to add extra data to files that are found in the logs, but also need to show when files are expected but not found in the logs. The lookup has a lookup definition defined, so that FileName can contain wildcards, and this works for matching the wildcarded filename to existing events, with other SPL.

Lookup definition with wildcard on FileName, for csv FTP-Out:

FileName      Type              Direction  weekday
File1.txt     fixedfilename     Out        monday
File73*.txt   variablefilename  Out        thursday
File95*.txt   variablefilename  Out        friday

Example events:

8/30/24 9:30:14.000AM FTPFileName=File1.txt Status=Success Size=14kb
8/30/24 9:35:26.000AM FTPFileName=File73AABBCC.txt Status=Success Size=15kb
8/30/24 9:40:11.000AM FTPFileName=File73XXYYZZ.txt Status=Success Size=23kb
8/30/24 9:45:24.000AM FTPFileName=garbage.txt Status=Success Size=1kb

Current search (simplified):

| inputlookup FTP-Out
| join type=left FileName
    [ search index=ftp_logs sourcetype=log:ftp
    | rename FTPFileName as FileName ]

Results I get:

8/30/24 9:30:14.000AM  File1.txt    fixedfilename     Out  monday    Success  14kb
                       File73*.txt  variablefilename  Out  thursday
                       File95*.txt  variablefilename  Out  friday

Desired output:

8/30/24 9:30:14.000AM  File1.txt         fixedfilename     Out  monday    Success  14kb
8/30/24 9:35:26.000AM  File73AABBCC.txt  variablefilename  Out  thursday  Success  15kb
8/30/24 9:40:11.000AM  File73XXYYZZ.txt  variablefilename  Out  thursday  Success  23kb
                       File95*.txt       variablefilename  Out  friday

Essentially I want the full filename and results for anything the wildcard in the lookup matches, but also want to show any time the wildcard filename in the lookup doesn't match an event in the search window. I've tried various other queries with append/appendcols and transaction, and the closest I've gotten so far is still with the left join; however, that doesn't appear to join with wildcarded fields from a lookup. It also doesn't seem that the where clause with a join off a lookup supports like(). I'm hoping that someone else might have an idea on how I can get the matched files as well as missing files in an output similar to my desired output above. This is within a Splunk Cloud deployment, not Enterprise.
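One untested sketch of an approach that avoids join: since the wildcard lookup definition already matches event-side, ask the lookup command to return the matching pattern itself, append the full lookup, and then keep events plus any pattern that matched nothing. All field and lookup names below come from the post; the overall shape is an assumption, not a verified answer:

index=ftp_logs sourcetype=log:ftp
| rename FTPFileName as FileName
| lookup FTP-Out FileName OUTPUT FileName as pattern, Type, Direction, weekday
| where isnotnull(pattern)
| append
    [| inputlookup FTP-Out
     | rename FileName as pattern ]
| eventstats count(FileName) as matches by pattern
| where isnotnull(FileName) OR matches=0
| table _time FileName pattern Type Direction weekday Status Size

The idea: matched events keep their full filenames and statuses, the appended lookup rows supply one row per expected pattern, and the eventstats/where pair drops the appended row whenever its pattern matched at least one event, leaving only the "expected but missing" patterns.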
This is now documented on dev.splunk.com: https://dev.splunk.com/enterprise/reference/modinputs/modinputsmanagerxml/ (sorry for the thread necromancy but I like to leave notes for others who stumble across the same questions)
THANK YOU!
I've never understood why Splunk doesn't just log the offending .csv file.
When you use indexed extractions, the events are parsed on the UF and are not touched by subsequent components (with some exceptions which we're not getting into here). So your props on the indexers do not have any effect on parsing. You're interested in TIMESTAMP_FIELDS (along with TIME_FORMAT, of course) on the UF.
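For illustration, a minimal props.conf sketch for the UF, assuming a CSV whose timestamp column is named event_time (both the sourcetype and column name here are hypothetical):

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = event_time
TIME_FORMAT = %Y-%m-%d %H:%M:%S

TIMESTAMP_FIELDS names the column(s) to read the timestamp from, and TIME_FORMAT tells the UF how to parse it.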
On a Dashboard Studio dashboard I have a dropdown input and a rectangle that can be clicked. When the rectangle is clicked, the value of the dropdown input's token should be changed to a specified value. Is that possible in Dashboard Studio?
It actually depends on your network environment and your configuration. The general answer is: you must have network connectivity between the environments, and the proper traffic must be allowed through the local OS-level firewall. In this respect SOAR and Splunk Enterprise are no different from any other network service - you must be able to connect to a port in order to use it, as simple as that. So it's not that I'm trying to be rude or something; it's just that there are so many variables here that it'd be better to engage your local network/Linux guru, because this is not Splunk-specific and local help will be much more responsive than ping-ponging stuff over an internet forum.
Splunk has a multi-day class on how to get data into Splunk, so I won't be able to cover the whole subject here. Every sourcetype ingested into Splunk should have a props.conf stanza that specifies at least the "Great Eight" settings. They are:

SHOULD_LINEMERGE
LINE_BREAKER
TIME_PREFIX
TIME_FORMAT
MAX_TIMESTAMP_LOOKAHEAD
TRUNCATE
EVENT_BREAKER_ENABLE
EVENT_BREAKER

You can read about each of these in the Admin Manual. Often, the time zone associated with an event is included in the timestamp (for example, "8/30/2024 10:52:00Z" or "8/30/2024 10:52:00-0700"). When that's the case, adding "%Z" or "%z" to the TIME_FORMAT setting is all you need. If the timestamp does not include zone information then adding the TZ setting to props.conf will help.

TZ = America/Los_Angeles
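To make that concrete, a hypothetical stanza for data whose events start with a timestamp like "8/30/2024 10:52:00-0700" (the sourcetype name and values are illustrative, not a drop-in config):

[my:custom:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

Because that TIME_FORMAT ends in %z, the zone comes from the event itself; if your timestamps carry no zone, drop the %z and add a TZ setting like the one above.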
Is there an option in Dashboard Studio to set/reset a token that was previously set by a "Click Event" to a new value when a specific search in the dashboard has finished running?

Just to clarify: I know that I can access tokens from the search with $search name:result.<field>$ or other tokens like $search name:job.done$. What I would need is to set a token when a search is done.

Example:

1. Token "tok_example" has the default value 0
2. With the click on a button (click event) in the dashboard, the token "tok_example" is set to value 1
3. This (the value 1 of the token "tok_example") triggers a search in the dashboard to run
4. After the search is finished, I want to set the token "tok_example" back to its original value 0 (without any additional interaction by the user with the dashboard)

Step 4 is the part I don't know how to do in Dashboard Studio. Is there a solution for that?
Installing add-ons is not enough to populate a datamodel.  You must have indexed data that matches what the datamodel looks for and is tagged appropriately. None of the listed SE content uses a macro called `summariesonly_config`.  Creating one is likely to be the easiest way around this error.  I would set the definition to 'summariesonly=true'.
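If you'd rather do it in a config file than in the UI, a minimal macros.conf sketch (which app it lives in is your choice, as long as it's shared so the content can see it):

[summariesonly_config]
definition = summariesonly=true

Once that's in place, any search that references `summariesonly_config` should expand it to summariesonly=true.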
Yes, I have checked, but cannot find any hint on how to achieve that in Dashboard Studio. (Sorry for the late response)
Have you eliminated app permissions/shares at the user level? The saved search may run as a user or as a default account (nobody). btool has a "--user" switch which I'm not 100% familiar with, but the docs do warn about having to use the "--app" switch as well, and then say that the user switch does not consider knowledge object permissions when evaluating the user.
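For reference, the general shape of the command (the search, app, and user names here are placeholders):

splunk btool savedsearches list mysearch --debug --app=search --user=jsmith

The --debug flag shows which file each setting came from, which helps spot precedence surprises.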
I found out that the message about duplicated configuration is from ES (Enterprise Security). This check has a period of 10 minutes.

apps/SplunkEnterpriseSecuritySuite/bin/configuration_checks/confcheck_es_correlationmigration.py:
MSG_DUPLICATED_STANZA = 'Configuration file settings can be duplicated in multiple applications: stanza="%s" conf_type="%s" apps="%s"'

I tested the above scenario on a clean Splunk Enterprise without ES, and the behavior matches the documentation, but btool does not.

[splunk@siemsearch01 apps]$ cat ATest_app/default/savedsearches.conf
[ss]
search = `super_macro` atest

[splunk@siemsearch01 apps]$ cat Test_app/default/savedsearches.conf
[ss]
search = `super_macro` test
request.ui_dispatch_app = search
disabled = 0
alert.track = 0
cron_schedule = */2 * * * *
dispatch.earliest_time = -4m
dispatch.latest_time = -2m
enableSched = 1

[splunk@siemsearch01 apps]$ cat ZTest_app/default/savedsearches.conf
[ss]
search = `super_macro` ztest

[splunk@siemsearch01 apps]$ /opt/splunk/bin/splunk btool savedsearches list --debug ss | grep "search ="
/opt/splunk/etc/apps/ATest_app/default/savedsearches.conf search = `super_macro` atest

btool returns: search = `super_macro` atest
GUI returns: index=ztest ztest, i.e. `super_macro` ztest
I am using the Splunk Enterprise trial version, and I am able to see live events coming from Palo Alto to Splunk. But when I select the last 30 minutes of logs in Splunk, it shows nothing; when I select All Time, it shows the latest events as well. I am assuming this can be due to a timezone issue. I tried to change the Palo Alto time zone, but that also didn't work. As per your solution using props.conf, can you please help me with what needs to change there? My Palo Alto time zone is US/Pacific. I am completely new to Splunk. @richgalloway
What version of the app are you using?  Does the vulnerability tool report a CVE?  What is it?
Your settings look good to me. The first problem may be with the documentation.  Submit feedback on the docs page telling them that btool doesn't match the documentation and they should update the docs. I'm not sure what can be done about the second problem other than ignoring it.
Splunk Cloud operates in the UTC time zone. Data could come in from any of 23+ other time zones, so trying to get them to match is futile. The correct process is to tell Splunk what time zone the data is from and let it adjust to the system time. Do that using props.conf. The best method to use depends on the data itself. See the Admin Manual's description of the TZ setting for more information.

The algorithm for determining the time zone for a particular event is as follows:

* If the event has a timezone in its raw text (for example, UTC, -08:00), use that.
* If TZ is set to a valid timezone string, use that.
* If the event was forwarded, and the forwarder-indexer connection uses the version 6.0 and higher forwarding protocol, use the timezone provided by the forwarder.
* Otherwise, use the timezone of the system that is running splunkd.
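For the Palo Alto case above, a sketch of the stanza that would go in props.conf on the first full Splunk instance the data reaches (pan:traffic is an assumed sourcetype; use whatever sourcetype your events actually carry):

[pan:traffic]
TZ = US/Pacific

Newly indexed events will then be read as US/Pacific and converted; events already indexed keep their original (wrong) timestamps.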
In a word: nothing. Ingestion will not stop.  Unless you're grossly over the contracted amount, Splunk is unlikely to contact you. At license renewal time, however, Splunk may recommend a higher limit (at higher cost, no doubt).