Hi, what do you mean by "run a modular input cleanup"? If that also removes the checkpoint value, then ingestion starts from scratch (read: from the beginning of the existing events). In some TAs you can see and copy the checkpoint value, e.g. to a text file. After the cleanup it may be possible to add the old values back and continue from that point. But this depends on the TA. r. Ismo
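A minimal sketch of copying a checkpoint aside before a cleanup (assuming a file-based modular input; the scheme name my_input is a placeholder, and some TAs store their checkpoints in the KV store instead of on disk):

# many modular inputs keep file checkpoints under this directory
cp -r $SPLUNK_HOME/var/lib/splunk/modinputs/my_input /tmp/my_input_checkpoint_backup
# remove the checkpoint data for that scheme (stop Splunk first)
splunk clean inputdata my_input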
We didn't put indexAndForward under [tcpout] because the documentation says: "This setting is only available for heavy forwarders." But we also tried with that configuration and the result was the same - it didn't work:

[tcpout]
indexAndForward = true
defaultGroup=external_system
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)
[tcpout:external_system]
disabled=false
sendCookedData=false
server=<external_host>:<external_port>

We applied this config with a bundle push to the indexers. The main issue is that the restart never ends, as you can see from the attached picture: at least one indexer remains in a "pending" state. After applying this config, the search factor and replication factor could not be met and ALL the indexes were not fully searchable. Despite the invalid state of the cluster, we saw data arriving on the external system.
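For comparison, a pattern sometimes used on indexers that must keep indexing locally while forwarding raw data to a third-party system is the separate [indexAndForward] stanza in outputs.conf, rather than the indexAndForward setting under [tcpout]. A hedged sketch (host and port are placeholders; worth testing on a single indexer before a cluster-wide bundle push):

[indexAndForward]
index = true

[tcpout]
defaultGroup = external_system
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)

[tcpout:external_system]
disabled = false
sendCookedData = false
server = <external_host>:<external_port>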
Hi @Siddharthnegi , in addition, remember to create a lookup definition [Settings > Lookups > Lookup Definition], otherwise you cannot fully use the lookup. Ciao. Giuseppe
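If you manage things in configuration files instead of the UI, the lookup definition corresponds to a transforms.conf stanza; a minimal sketch with placeholder names:

[my_lookup]
filename = my_lookup.csv

After that you can use it in SPL, e.g. | lookup my_lookup host OUTPUT owner (the field names here are assumptions for illustration).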
Hi, I expect that those are two different events, and with your sample query you get only the 1st event, not both? If this is true, you must first write an SPL query whose result contains both events. Then there must be something common in those events to connect them together. In your example events I can't see anything like that! You probably need to find some other logs that you could use to combine all the events of one transaction together. The only common factor between those events is D082923, but it seems to be part of a file name or something? I assume it appears like this in many transactions and cannot be used as an identity for a single transaction? r. Ismo
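Once a shared field is found, a minimal sketch of gluing the events together (txn_id is a hypothetical common field, and the index name is a placeholder):

index=your_index
| stats values(_raw) as events earliest(_time) as start latest(_time) as end by txn_id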
Hi, you said that Tag2 can be "blank", but what does blank actually mean? Does it mean a value which is empty or a space, or that the Tag doesn't exist at all? Only the last option means that you can use the functions isnull(Tag2) or isnotnull(Tag2). The 1st and 2nd options mean that Tag2 exists (isnotnull), but it has no value or its value is " ". r. Ismo
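A small sketch that separates those cases in SPL (using Tag2 as in your question):

| eval tag2_state=case(isnull(Tag2), "field missing",
    Tag2=="", "empty value",
    match(Tag2, "^\s+$"), "whitespace only",
    true(), "has a real value")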
Hi, it's just like @richgalloway said. Try to avoid creating any unnecessary indexes. There is an upper limit on indexes from both a technical and a usability point of view. I assume that you have big indexer clusters in use; there is a limit for the maximum amount of buckets, some tens of millions if I recall right. I haven't seen those limits mentioned since version 8 (in some .conf presentation). If you really need that amount of indexes then you probably must create several indexer clusters to manage that amount of buckets. In that case I suggest you contact your local Splunk partner or Splunk's PS service to update your architecture! r. Ismo
You could also use the MC to look at those. Just select MC -> Search -> Scheduler; there are a couple of different dashboards. Then select a suitable panel, open its SPL, and modify as needed.
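Those panels are largely built on the scheduler logs, so a hedged starting point you can adapt is:

index=_internal sourcetype=scheduler status=*
| stats count by savedsearch_name status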
Hi, I don't remember any other commonly used TA on HFs than the newest DB Connect which requires the kvstore. Unfortunately that is not clearly stated in the documentation, if I recall right. So without DBX, you should disable the kvstore on HFs too. r. Ismo
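For reference, disabling the KV store is done in server.conf (a restart is needed afterwards):

[kvstore]
disabled = true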
Hi, it's really weird that this and a couple of other commands are not documented here. Maybe it's worth asking the doc team about that? If I recall right, those were "published" in 7.3 (or 7.2)? At least these exist without being mentioned on those doc pages:

splunk show-encrypted --value 'changeme'
splunk hash-passwd changeme
splunk gen-random-passwd
splunk gen-cc-splunk-secret

(see: https://docs.splunk.com/Documentation/Splunk/9.1.0/CommonCriteria/Commoncriteriainstallationandconfigurationoverview) There are probably some other undocumented commands too. Some of them are used e.g. in splunk-ansible scripts, and there is other documentation on the net by parties other than Splunk. r. Ismo
What @richgalloway said is correct, but technically it's possible to format the return value so it can be used in the IN statement. Your problem is that you are not crafting a subsearch - you're missing the [] subsearch brackets. You could do it like this, but you wouldn't really want to:

index=syslog src_ip IN (
[
| tstats count from datamodel=Random by ips
| stats values(ips) as IP
``` You could technically do this, but it's not necessary
| eval IP = mvjoin(IP, ",")```
``` Use this return $ statement to return a space separated string
but you could technically use the mvjoin and have a comma separated one```
| return $IP
]
)
I also suspect that you did not post your complete message text field, as that rex statement would not produce the results you gave, due to \D+. Can you post your message text field completely?
Hi, it's just like @ITWhisperer said. There must be some way to combine the events which belong to one transaction. Your current example doesn't contain any information about that. Once you find some common information which is present in all of those events, you can try e.g. @gcusello's way to combine them together. I assume that there could be outputs from several processes on one or more nodes which generate those log events? If there is only one node and only one process at a time, then you can use @gcusello's example as is. The best way to continue is to ask the developer to add some unique transaction id (e.g. uuidgen -> B49A0412-3EBB-4377-A026-D8E43EC9F7F1, different output on every run) to the logs, which you could then use to combine transactions together. r. Ismo
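Once such an id is in the logs, combining becomes straightforward; a minimal sketch (the field name txn_id and the rex pattern are assumptions about a log format that doesn't exist yet):

index=your_index
| rex "transaction_id=(?<txn_id>[0-9A-Fa-f-]{36})"
| transaction txn_id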
I'm confused ... why have you not just done

| eval "Sequence Number"=split('Message Text', ",")
| table "Sequence Number"

as advised earlier? Substitute the actual field name for Message Text above.
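Note that split() produces a multivalue field; if you want one row per sequence number instead, a small sketch (using a space-free field name to keep the syntax simple):

| eval seq_num=split('Message Text', ",")
| mvexpand seq_num
| table seq_num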
Hi, if you have used those IPs in an alert's SPL, then they are searchable via REST. But if you are looking for a way to find the IPs which are in the result set of some alert, that could be quite hard. With REST you can find all alerts and the search commands they use. Then you can try to extract the IPs from those searches or from the lookups they use. Here is one SPL which I have used to get a list of all alerts and reports:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search disabled=0 AND is_scheduled=1
```| search NOT eai:acl.app IN (splunk_instrumentation splunk_rapid_diag splunk_archiver splunk_monitoring_console splunk_app_db_connect splunk_app_aws Splunk_TA_aws Splunk_ML_Toolkit )```
| rename "alert.track" as alert_track
| eval type=case(alert_track=1, "alert",
(isnotnull(actions) AND actions!="") AND (isnotnull(alert_threshold) AND alert_threshold!=""), "alert",
(isnotnull(alert_comparator) AND alert_comparator!="") AND (isnotnull(alert_type) AND alert_type!="always"), "alert",
true(), "report")
| fields title type eai:acl.app is_scheduled description search disabled triggered_alert_count actions action.script.filename alert.severity cron_schedule
```| where type = "alert" ```
| dedup title eai:acl.app
| sort eai:acl.app title

Just add | where type = "alert" to the end and you will get only the alerts. Then continue with a field search to look at each alert's SPL command etc. r. Ismo