All Posts

Hello Team, I have a log like this: File Records count is 2, File Records count is 5, File Records count is 45, File Records count is 23. I have extracted the values 2, 5, 45, 23 into a separate field called Count. When I use "base search | table Count" I get the expected values in a statistics table, but I want 2, 5, 45, 23 to be plotted as a line graph. I tried stats commands, but they only show the number of events containing Count, not the values of Count. Could you please advise on how I can plot the values of Count in a graph?
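A minimal sketch of two common approaches, assuming the field is named Count as in the question and that "base search" stands in for your real search. Either plot Count against time with timechart, or give each event a sequence number so the raw values appear in order on the x-axis:

```
base search
| timechart span=5m avg(Count) as Count
```

```
base search
| streamstats count as event_no
| table event_no Count
```

With the second query, choose event_no as the x-axis and Count as the y-axis in the line chart visualization.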
Yes, that's right. OK, I'll give that a go. Thanks
Hi, what do you mean by "run a modular input cleanup"? If that also removes the checkpoint value, then ingestion starts from scratch (read: from the beginning of the existing events). With some TAs you can view and copy the checkpoint value, e.g. to a text file. After cleanup it may be possible to add the old value back and continue from that point, but this depends on the TA. r. Ismo
We didn't put indexAndForward under the [tcpout] stanza because the documentation says:

* This setting is only available for heavy forwarders.

But we also tried with this configuration, and it didn't work either:

[tcpout]
indexAndForward = true
defaultGroup = external_system
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)

[tcpout:external_system]
disabled = false
sendCookedData = false
server = <external_host>:<external_port>

We applied this config with a bundle push on the indexers. The main issue is that the restart never ends, as you can see from the attached picture: at least one indexer remains in a "pending" state. After applying this config, the search factor and replication factor cannot be met and ALL the indexes are not fully searchable. Despite the invalid state of the cluster, we saw data arriving on the external system.
Hi @Siddharthnegi, in addition, remember to create a lookup definition [Settings > Lookups > Lookup Definitions]; otherwise you cannot fully use the lookup. Ciao. Giuseppe
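For reference, a lookup definition created in that UI corresponds to a transforms.conf stanza along these lines (the stanza name and CSV filename below are placeholders, not from the thread):

```
[my_lookup]
filename = my_lookup.csv
```

You can then use it in SPL as `| lookup my_lookup <input_field> OUTPUT <output_field>`.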
We have the Splunk Add-on for AWS, under which we configured a CloudTrail input of type SQS-Based S3. We are not getting logs continuously.
Hi, I'm expecting that those are two different events, and with your sample query you get only the 1st event, not both? If this is true, you must first write an SPL query whose result contains both events. Then there must be something common in those events to connect them together. In your example events I can't see anything like that! You probably need to find some other logs which you could use to tie all the events of one transaction together. The only common factor between those events is D082923, but it seems to be part of a file name or something? I assume it appears in many transactions and cannot be used as the identity of a single transaction? r. Ismo
Hi, you said that Tag2 can be "blank", but what does blank actually mean? Does it mean the value is empty, or a space, or that the Tag doesn't exist at all? Only the last option means you can use the functions isnull(Tag2) or isnotnull(Tag2). The 1st and 2nd options mean that Tag2 exists (isnotnull) but has no value, or its value is " ". r. Ismo
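A quick sketch to tell the three cases apart (assuming the field is named Tag2 as in the question; tag2_state is just an illustrative name):

```
base search
| eval tag2_state = case(isnull(Tag2), "field missing",
                         trim(Tag2) == "", "empty or whitespace",
                         true(), "has a value")
| stats count by tag2_state
```

Run this once over your data and the counts will show which kind of "blank" you actually have, which in turn tells you whether isnull()/isnotnull() or a trim() comparison is the right filter.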
Hi, or is this data structured like JSON, and do you have both INDEXED_EXTRACTIONS and KV_MODE defined? r. Ismo
Hi, to help you, we need more information about your environment (Splunk + app) and how you are collecting those logs. r. Ismo
Hi, it's just like @richgalloway said. Try to avoid creating unnecessary indexes. There is an upper limit on the number of indexes, from both a technical and a usability point of view. I assume you have big indexer clusters in use, and there is a limit on the maximum number of buckets; some tens of millions, I assume. I haven't seen those limits mentioned since version 8 (in some .conf presentation). If you really need that many indexes, you probably must create several indexer clusters to manage that number of buckets. In that case I suggest you contact your local Splunk partner or Splunk's PS service to update your architecture! r. Ismo
You could also use the MC to look at those. Just select MC -> Search -> Scheduler; there are a couple of different dashboards there. Then select a suitable panel, open its SPL, and modify it as needed.
Thank you so much, all, for your inputs; we were able to get the data from another set of logs. Thank you so much!!
Hi, I don't remember any other commonly used TA on HFs besides the newest DB Connect that requires the KV store. Unfortunately, that is not clearly stated in the documentation, if I recall right? So without DBX, you should disable the KV store on the HF too. r. Ismo
Hi, it's really weird that this and a couple of other commands are not documented here. Maybe it's worth asking the doc team about that? If I recall right, those were "published" in 7.3 (or 7.2)? At least these also exist without any mention on those doc pages:

splunk show-encrypted --value 'changeme'
splunk hash-passwd changeme
splunk gen-random-passwd
splunk gen-cc-splunk-secret

(see: https://docs.splunk.com/Documentation/Splunk/9.1.0/CommonCriteria/Commoncriteriainstallationandconfigurationoverview)

There are probably some other undocumented commands too. Some of those are used e.g. by the splunk-ansible scripts, and there is other documentation on the net by parties other than Splunk. r. Ismo
What @richgalloway said is correct, but technically it is possible to format the return value so it can be used in an IN statement. Your problem is that you are not crafting a subsearch - you're missing the [] subsearch brackets - but you could do it like this (although you wouldn't really want to):

index=syslog src_ip IN (
    [ | tstats count from datamodel=Random by ips
      | stats values(ips) as IP
      ``` You could technically do this, but it's not necessary:
      | eval IP = mvjoin(IP, ",") ```
      ``` This return $ statement returns a space-separated string, but you could
          technically use the mvjoin above and have a comma-separated one ```
      | return $IP ]
)
Hi All, We want to run a modular input cleanup. What happens to the checkpoints? Will ingestion start from the beginning again? Thanks, Nick
Interesting; it does look like you can't use a token as an attribute value in XML. Not sure if that can be changed.
I also suspect that you did not post your message text field completely, as that rex statement would not produce the results you gave, due to the \D+. Can you post your message text field in full?
Hi, it's just like @ITWhisperer said. There must be some way to combine the events which belong to one transaction, and your current example doesn't contain any information for that. When you can find some common information which is present in all of those events, you can try e.g. @gcusello's way of combining them together. I assume there could be output from several processes on one or more nodes generating those log events? If there is only one node and only one process at a time, then you can use @gcusello's example as is. The best way to continue is to ask the developer to add a unique transaction id (e.g. uuidgen -> B49A0412-3EBB-4377-A026-D8E43EC9F7F1, a different output on every run) to the logs, which you could then use to combine transactions together. r. Ismo
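Once such an id is in the logs, grouping the events becomes a one-liner. A sketch, assuming the developer emits it in a field called txn_id (that name, and the message field, are placeholders, not from the thread):

```
base search
| stats earliest(_time) as start latest(_time) as end values(message) as messages by txn_id
| eval duration = end - start
```

This collapses all events sharing a txn_id into one row per transaction, with its start, end, duration, and collected messages.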