Hi, I've got a problem with this playbook code block: the custom functions I try to execute seem to hang indefinitely. I know the custom function works, because I've successfully used it from a utility block. I've tried a few different arrangements of this logic, including initializing cfid with both of the custom function calls, consolidating the custom function names into a single while loop with phantom.completed, and using pass instead of sleep, but the custom function never seems to return/complete. Here's another example, which is basically the same except that it consolidates the while loops and executes both custom functions at the same time. Once either of these scenarios (or something similar) is successful, I need to get the results from the custom function executions (pictured below), combine them into a single string, and then send "data" to another function: > post_http_data(container=container, body=json.dumps({"text": data})) Any assistance would be great. Thanks.
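Once the custom functions do return, the combine-and-post step at the end can be sketched in plain Python. This is only a sketch: the result lists are hypothetical stand-ins for whatever phantom.collect2() would return from the custom function datapaths, and post_http_data is the function named in the post above.

```python
import json

# Hypothetical outputs gathered from the two custom function runs,
# e.g. via phantom.collect2() on their result datapaths.
cf1_results = ["alert A", "alert B"]
cf2_results = ["host X"]

# Combine everything into a single string for the HTTP body.
data = "\n".join(cf1_results + cf2_results)

# Build the JSON body exactly as in the post's call.
body = json.dumps({"text": data})
# post_http_data(container=container, body=body)  # as in the original post
print(body)
```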
Hi, you can always ask Splunk to split your enterprise license into 5 GB and 45 GB license files. Then also request a 5 GB ES license. Then just use a separate license manager (LM) to hold those two 5 GB files and use it for your SIEM instance. This will fulfill the official requirements. r. Ismo
Hi, rule of thumb: never restore anything into a running system unless your product supports it! If you have a single instance where you took that backup, you should use a separate dummy/empty instance to restore into. I suppose that even in that case you will have some issues with files, e.g. hot buckets, and buckets that switched state from warm to cold or cold to frozen during your backup window. If you used e.g. a snapshot for the backup, this is not such a big issue. After restoration, just bring the service up (change the Splunk node name, or shut down the primary instance first). If you have a clustered environment, it's much harder to get a working backup and restore it. I really suggest that you use snapshots for backing up! You must take them at the same time on all your indexers to get a consistent backup. I really recommend an empty test environment for restoration! r. Ismo
I have UFs configured on several Domain Controllers that point to a Heavy Forwarder, which in turn points to Splunk Cloud. I'm trying to configure Windows Event Logs. Application, System & DNS logs are working correctly; however, no Security logs are coming in for any of the DCs. The Splunk service is running with a service account that has proper admin permissions, and I have edited the DC GPO to grant the service account the 'Manage auditing and security log' right. I am at a loss here and not sure what else to troubleshoot. Here is the inputs.conf file on each DC:

[WinEventLog://Application]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://DNS Server]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog
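As a first check on the search side, it's worth confirming whether any Security events are reaching the index at all (a hypothetical search based on the config above; adjust the time range to taste):

index=wineventlog source="WinEventLog:Security" earliest=-24h
| stats count by host

If this returns nothing while the Application/System/DNS sources do, the problem is on the collection side (permissions, channel access, or the input itself) rather than at search time.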
The rex command can either extract fields from an event or replace text in an event. In this case, mode=sed tells it to replace text. The field=summary option restricts the command to the contents of the summary field. The quoted string is the sed command to execute. The 's' represents the substitute command. The part after the first slash is a regular expression. It says to look for the string "from '" followed by any number of additional characters (.*). The parentheses create a group we'll refer back to later. The part after the next slash is the replacement text. It puts the "from" back, adds a newline character (\n), then adds the remainder of the original text (the group from part 1). To read more about rex, see the Search Reference manual. https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Rex
Hello, I've read the following documentation: https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Backupindexeddata https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Backupconfigurations Basically, to back up Splunk, I need to make a copy of "$SPLUNK_HOME/etc/*" and "$SPLUNK_HOME/var/lib/splunk/defaultdb/db/*" (after rotating the hot buckets). My question is, how is this restored? Would I just paste the copied files back into a working Splunk instance? Then the data can be searched normally? Thank you
Responding to this because we recently received the same question at Community Office Hours. In case anyone else is looking to do this, here's the expert guidance:

Option 1: Workload Management is your friend here. A very easy implementation without messing with all the roles.
- Workload Management examples (see scenarios 1 and 2)
- Configuring workload rules

Option 2: Use role limitations to dictate the rule.
- Follow the guidance in: How to restrict usage of real-time search
Try something like this:

|rest /services/data/indexes
|rename title as index
|rex field=index "^foo_(?<appname>.+)"
|rex field=index "^foo_(?<appname>.+)_"
|table appname, index
|stats dc(appname) as count
|eval title = "currentapps"
| append
[| makeresults
| eval count = 300
| eval title="total_apps"]
| table title count
useACK = <boolean>
* Whether or not to use indexer acknowledgment.
* Indexer acknowledgment is an optional capability on forwarders that helps
prevent loss of data when sending data to an indexer.

The workaround means you don't use indexer acknowledgment, so you run the risk of losing data during an indexer restart. That solution is not suitable for me.
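For reference, enabling indexer acknowledgment on the forwarder is a small outputs.conf change (a sketch only; the output group name and server addresses are hypothetical):

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true

With useACK = true the forwarder holds events in its wait queue until the indexer confirms they were written, so an indexer restart causes the forwarder to resend rather than lose data.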
Also remember that the Trial license is granted to a particular party, for a particular use (testing the solution to see if it's appropriate for the intended purpose), and for a limited time. Attempting to "prolong" the trial license by moving data to a fresh instance is against the license terms. If you need a longer-term trial license, contact your local friendly Splunk Partner.
I wouldn't use words like "illegal", especially since legality may differ between countries, but it all depends on your agreement with Splunk. By default you just buy a single Splunk Enterprise license for your organization and an Enterprise Security license which matches your Splunk Enterprise license size. If you have a specific need (like needing two separate licenses because you have two completely unconnected sites which can't be handled by a single license manager), you have to talk with your local Partner/Splunk sales representative. This is a custom case and has to be treated as such. We can't know whether Splunk will grant such a "license layout" or not.
Yes, I understand. But this is a relatively complicated thing.

1. Unless you have exclusive access enabled on your account, two people may connect at the same time to the destination system, and you can't distinguish between them.
2. Since the action from one search occurs before the action from another search, it's not as easy as just matching by time and host.

I think I'd try to pull both searches into one result set and then "populate down" the value from the PAS user to the Windows events using streamstats (or try to use transaction, but using that command is generally not advised unless you really have no other option).
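A sketch of that "populate down" idea in SPL (all index, sourcetype, and field names here are hypothetical; adjust them to your data):

(index=pas sourcetype=pas_audit) OR (index=wineventlog source="WinEventLog:Security")
| eval pas_user=if(index="pas", user, null())
| sort 0 _time
| streamstats last(pas_user) as pas_user by host
| where index="wineventlog"

Because streamstats aggregation functions ignore null values, last(pas_user) carries the most recent PAS user forward, so each Windows event on the same host inherits the user from the PAS event that preceded it.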