Hi @rphillips_splk, While I know this was some time ago, I still find this very interesting! You used 2 different routing types here, so I need to ask whether this could also be applied to 2 (different) TCP connections, so the cloned copy could also be sent as TCP and not syslog? Moreover, I'm in a situation where there will be an additional HF between what you show as the "syslog receiver" above and the actual indexer, so basically a route like this: UF -> HF (clone) -> IDX |-> HF -> IDX Can this be done as smoothly as above? If so, how?
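For context, cloning one stream to two Splunk TCP (S2S) output groups on a HF can be done purely in outputs.conf; this is only a rough sketch with placeholder group names and addresses, not the exact setup from the original post:
# outputs.conf on the cloning HF (group names and targets are placeholders)
[tcpout]
# listing two groups here clones every event to both of them
defaultGroup = primary_indexers, secondary_hf
[tcpout:primary_indexers]
server = idx1.example.com:9997
[tcpout:secondary_hf]
server = hf2.example.com:9997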
Hi You could also forward to Splunk as S2S traffic. This should be enough for that in outputs.conf on your indexers:
[tcpout]
indexAndForward=true
[tcpout:<your target group name>]
server=<target server ip>:<used port, like 9997 for S2S>
# other parameters you want to use, like a blacklist
Then you should remember that if that connection doesn't work, indexing on the local node will stop after the output queue is full! r. Ismo
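To soften that blocking behaviour, a couple of standard outputs.conf settings can be tuned; the values below are only illustrative, not recommendations:
[tcpout:<your target group name>]
server = <target server ip>:9997
# allow a larger in-memory output queue before blocking
maxQueueSize = 512KB
# drop events after waiting this many seconds instead of blocking indexing
# (-1, the default, means block and never drop)
dropEventsOnQueueFull = 300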
Hi, Some of the dashboards in my Splunk Monitoring Console return this error: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details". The developer console shows a 404 Not Found error on many scripts; the same error is issued for other JS files too, like PopTart.js or Base.js. Searching for all these files in the Splunk folder, I noticed these scripts are all stored in a folder called quarantined_files, an odd folder placed directly in the /opt/splunk/ path. Any ideas on how to debug this error?
Hi @Devi13, probably the Count values are strings, so did you try to convert them to numbers using eval tonumber (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/ConversionFunctions#tonumber.28.26lt.3Bstr.26gt.3B.2C.26lt.3Bbase.26gt.3B.29)?
base search
| eval Count=tonumber(Count)
| table Count
Ciao. Giuseppe
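If the goal is a line chart of those values over time (assuming each event carries a usable _time), a sketch along these lines might work; the span is just an example:
base search
| eval Count=tonumber(Count)
| timechart span=1m max(Count) as Count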
Hello Team, I have logs like this: File Records count is 2 File Records count is 5 File Records count is 45 File Records count is 23 and I have extracted the values 2, 5, 45, 23 as a separate field called Count. When I use "base search | table Count" I get the expected values in a stats table, but I want 2, 5, 45, 23 to be plotted on a line graph. I tried stats commands, but they only show the number of events with Count, not the values of Count. Could you please provide your assistance on how I can plot the values of Count onto a graph?
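For reference, an inline extraction roughly like the one described could look as follows (the field name Count comes from the post, the regex itself is an assumption to adapt):
base search
| rex "File Records count is (?<Count>\d+)"
| table _time Count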
Hi what do you mean by "run a modular input cleanup"? If this also means removing the checkpoint value, then ingestion starts from scratch (read: from the beginning of the existing events). With some TAs you can see and copy the checkpoint value, e.g. into a text file. After the cleanup it may be possible to add the old values back and continue from that point. But this depends on the TA. r. Ismo
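As a rough illustration only (the checkpoint location and scheme name depend entirely on the TA, and the commands below are assumptions to adapt, not an exact procedure):
# back up the modular input checkpoint data before any cleanup
cp -r $SPLUNK_HOME/var/lib/splunk/modinputs/<scheme_name> /tmp/modinputs_backup
# then remove the checkpoint data for that input scheme
# (Splunk should be stopped before running clean)
splunk clean inputdata <scheme_name>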
We didn't put indexAndForward under [tcpout] because the documentation says: * This setting is only available for heavy forwarders. But we also tried with this configuration and it didn't work either:
[tcpout]
indexAndForward = true
defaultGroup=external_system
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)
[tcpout:external_system]
disabled=false
sendCookedData=false
server=<external_host>:<external_port>
We applied this config with a bundle push on the indexers. The main issue is that the restart never ends, as you can see from the attached picture: at least one indexer remains in a "pending" state. After applying this config, the search factor and replication factor cannot be met and ALL the indexes are not fully searchable. Despite the invalid state of the cluster, we saw data arriving on the external system.
Hi @Siddharthnegi, in addition, remember to create a lookup definition [Settings > Lookups > Lookup Definitions], otherwise you cannot fully use the lookup. Ciao. Giuseppe
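Once the definition exists, it is referenced by its definition name in SPL; the names below are just placeholders to show the shape (the first line simply verifies the contents, the second enriches events):
| inputlookup my_lookup_definition
index=main sourcetype=my_data
| lookup my_lookup_definition key_field OUTPUT enrichment_field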
Hi I expect that those are two different events, and with your sample query you get only the 1st event, not both? If this is true, you must first implement an SPL query whose results contain both events. Then there must be something common in those events to connect them together. In your example events I can't see anything like that! Probably you must find some other logs which you could use to combine all events of one transaction together? The only common factor between those events is D082923, but it seems to be part of a file name or something? I assume it appears like this in many transactions and cannot be used as the identity of only one transaction? r. Ismo
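If such a common field can be found, the combining step might look roughly like this (common_id, status and the two search conditions are purely hypothetical placeholders):
index=your_index (condition_for_event1 OR condition_for_event2)
| stats earliest(_time) as start latest(_time) as end values(status) as status by common_id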
Hi you said that Tag2 can be “blank”, but what does this blank actually mean? Does it mean a value which is empty or a space, or that this Tag doesn't exist at all? Only the last option means that you could use the functions isnull(Tag2) or isnotnull(Tag2). The 1st and 2nd options mean that Tag2 exists (isnotnull), but it has no value or its value is “ ”. r. Ismo
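One way to tell those three cases apart in SPL could be something like this (the label values are arbitrary):
| eval Tag2_state=case(isnull(Tag2), "field missing", match(Tag2, "^\s*$"), "empty or whitespace only", true(), "has a real value")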
Hi it's just like @richgalloway said. Try to avoid creating any unnecessary indexes. There is an upper limit on indexes from both a technical and a usability point of view. I assume that you have big indexer clusters in use, and there is a limit on the max number of buckets you can use, some tens of millions I assume. I haven't seen those limits stated since version 8 (in some .conf presentation). If you really need that number of indexes then you probably must create several indexer clusters to manage that number of buckets. In that case I suggest you contact your local Splunk partner or Splunk's PS service to update your architecture! r. Ismo
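As a quick way to see how close an environment already is, current bucket counts per index can be checked with something like this (run from a search head that can reach all indexers; the totals line is optional):
| dbinspect index=*
| stats count as buckets by index
| addcoltotals labelfield=index label=ALL_INDEXES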
You could also use the MC to look at those. Just select MC -> Search -> Scheduler, and there are a couple of different dashboards there. Then select a suitable panel, open its SPL, and modify it as needed.
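The panels behind those dashboards are essentially searches over the scheduler logs, so a starting point along these lines can also be adapted by hand (the fields come from scheduler.log; the grouping is just an example):
index=_internal sourcetype=scheduler status=*
| stats count by app, savedsearch_name, status
| sort - count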
Hi I don't remember any other commonly used TA on HFs than the newest DB Connect that requires kvstore. Unfortunately that is not clearly stated in the documentation, if I recall right? So without DBX, you should disable kvstore on the HF too. r. Ismo
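Disabling it is a one-line change in server.conf on the HF, roughly like this (followed by a restart):
# server.conf on the heavy forwarder
[kvstore]
disabled = true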
Hi it's really weird that this and a couple of others are not documented here. Maybe it's good to ask about that from the doc team? If I recall right, those were "published" in version 7.3 (or 7.2)? At least these exist too without any mention on those doc pages:
splunk show-encrypted --value 'changeme'
splunk hash-passwd changeme
splunk gen-random-passwd
splunk gen-cc-splunk-secret (see: https://docs.splunk.com/Documentation/Splunk/9.1.0/CommonCriteria/Commoncriteriainstallationandconfigurationoverview)
Probably there are some other undocumented commands too. Some of those are used e.g. in splunk-ansible scripts, and there is other documentation on the net by someone other than Splunk. r. Ismo
What @richgalloway said is correct, but technically it's possible to format the returned value so it can be used in the IN statement. Your problem is that you are not crafting a subsearch - you're missing the [] subsearch brackets. You could do it like this, but you wouldn't really want to...
index=syslog src_ip IN (
[
| tstats count from datamodel=Random by ips
| stats values(ips) as IP
``` You could technically do this, but it's not necessary
| eval IP = mvjoin(IP, ",")```
``` Use this return $ statement to return a space separated string
but you could technically use the mvjoin and have a comma separated one```
| return $IP
]
)