All Posts


Hi @Real_captain, the mode=sed was not part of what the field extraction wizard generated. May I know why you chose to use mode=sed? As you can see in the docs:
Syntax: mode=sed. Description: Specify to indicate that you are using a sed (UNIX stream editor) expression. sed-expression Syntax: "<string>". Description: When mode=sed, specify whether to replace strings (s) or substitute characters (y) in the matching regular expression.
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex#Syntax
Not sure, but let's try:
| rex field=Message "(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH1>[^\"]+)\""
Sample log lines would be helpful for troubleshooting this, thanks.
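For contrast, mode=sed is only for rewriting the contents of a field in place rather than extracting a new field. A minimal sketch of a sed-style rex (the key name here is made up) would look something like:
| rex field=Message mode=sed "s/\"some_key\": \"[^\"]+\"/\"some_key\": \"REDACTED\"/g"
which replaces the matched value inside Message instead of creating a field like POH1.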
Thanks @richgalloway for your response. I tried | where hostname like hostname_pattern but it's not returning any search results.
I had the same understanding, thanks for confirming that. I am being asked to modify the raw event that we receive in JSON format to include a new key-value pair and to replace the value of one of the fields for a specific key.
This was my last fallback option, as I have multiple fields and the query would become lengthy. This also gives me the flexibility to add extra fields to the _raw event. I am just assuming that Splunk has some built-in solution that I might be missing.
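If changing the events at search time is acceptable, a minimal sketch using the built-in JSON eval functions (the key names here are placeholders, and this requires Splunk 8.1 or later):
| eval _raw=json_set(_raw, "new_key", "new_value")
| eval _raw=json_set(_raw, "existing_key", "replacement_value")
Note that this only rewrites the copy of _raw in the search results; the indexed event itself is not modified.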
Ok. Unless you do something very strange, a Splunk component should be reading and indexing or forwarding its own internal logs. That's why I asked about the internal logs. Your output from list monitor shows just that, and it's a normal thing.
I asked how you checked whether you're getting the data or not because it's a fairly typical case, when your source has misconfigured time settings (either the clock is not in sync or the timezone is wrongly set up), that the data is actually indexed but at the wrong point in time. So when you search "last 15 minutes" or the last few hours it doesn't show up, but the data is there, just badly onboarded. Try searching for those "not working" hosts over a bigger time range (you could risk all-time, especially if you do it with tstats):
| tstats min(_time) max(_time) count where index=_internal host=<your_forwarder_>
I'm assuming your data flow is UF->HF->idx, right? Do the Windows UFs go through the same HFs as the Linux ones? Look for information about connections established to the downstream HF (or errors) in the UF's splunkd.log. If there are errors, look for corresponding errors/warnings on the HF's side.
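If the UF's internal logs do make it to the indexers, a quick sketch for spotting forwarding problems (the host value is a placeholder, and this assumes the default splunkd log fields):
index=_internal host=<your_forwarder> sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)
Recurring warnings or errors from that component generally relate to the connection to the downstream HF.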
From the UF: ./splunk list monitor
tcpdump from the UF, checking traffic on the HF's IP
tcpdump from the HF, checking traffic on the UF's IP
1. Infra: UF (Windows and RedHat 8.10) and HF (RedHat 9.4) are in Azure. Logs are forwarded to indexers (remote, on-prem).
2. Windows (UF) logs are received by the indexers.
3. Linux (UF) logs are not received by the indexers.
4. From the Linux UF, ./splunk list monitor lists all the logs to be forwarded. netstat -an shows an established connection on the forwarding port for both the UF and HF IP addresses.
5. Continuous traffic observed going out from the UF to the HF (SYN and ACK in tcpdump).
6. Yes. What exactly should I check in splunkd.log?
What commands can I use to confirm whether logs are forwarded from the UF to the HF, and then from the HF to the indexers?
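One way to check the UF->HF and HF->indexer legs from a search head, assuming the internal logs are being forwarded (field names as they appear in a default metrics.log), is to look at the receiving side for incoming forwarder connections:
index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) as kb_received latest(_time) as last_seen by hostname, sourceIp
Scope it to the HF's host to see what it receives from the UFs, and to the indexers to see what they receive from the HF.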
The fields can be extracted using the rex command or by using the split function (and perhaps others).
| eval FIELDS = split(FIELD1, "/")
| rex field=FIELD1 max_match=0 "(?<FIELDS>[^\/]+)"
Both commands will extract the fields into a multi-value field, so you'll need to assign them to separate fields.
| foreach 1 2 3 4 5 6 7 [eval FIELD_<<FIELD>>=mvindex(FIELDS,<<FIELD>>-1)]
ENOTENOUGHINFO But seriously:
1. What does your infrastructure look like?
2. Do you get _any_ logs from any of your new hosts (including internal indexes)?
3. How did you verify that the data is not ingested?
4. Did you do any more troubleshooting, or just the tcpdump?
5. What do you see in your tcpdump output?
6. Did you check splunkd.log on the involved hosts?
1. Those are slashes, not backslashes.
2. Is the number of fields constant? If not, you can't use regex alone to split it into fields.
3. Isn't splitting the string with the eval split() function enough?
As a general task it's simply impossible. How are you supposed to know whether your results come from a search
index=windows | stats count
or
| makeresults | eval count=10 | table count
Ok, this is an extreme example but it should show my point fairly well - without a lot of assumptions you can't know what data the results came from. The main issue with your problem is not the tool (although you probably want something that has ready-made libraries to interface with Splunk so you don't have to reinvent the wheel). The main issue is the method you'd want to use to build such a search. This is something you'd have to give the most consideration to.
Hi Team, Can someone please help me to extract the backslash separated field into multiple fields? Example: the field is present in Splunk as below:
Field = ABCD/EFG6/CR/IN/OU/XY/BMW
I need to use the rex command to extract the above field into 7 fields as below:
Field_1 = ABCD
Field_2 = EFG6
Field_3 = CR
Field_4 = IN
Field_5 = OU
Field_6 = XY
Field_7 = BMW
In case the value of the field is as below:
Field = ABCD
then the rex command should generate the 7 fields as below:
Field_1 = ABCD
Field_2 =
Field_3 =
Field_4 =
Field_5 =
Field_6 =
Field_7 =
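A hedged sketch that produces the padded behaviour described above, building on the split()/foreach approach from the replies (the field name Field is taken from the example):
| eval FIELDS=split(Field, "/")
| foreach 1 2 3 4 5 6 7 [ eval Field_<<FIELD>>=coalesce(mvindex(FIELDS, <<FIELD>>-1), "") ]
mvindex() returns NULL for positions beyond the number of values, so coalesce() turns the missing positions into empty strings.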
On the other hand - if you don't know which buckets come from which index... well, that's too bad. You're most probably gonna restore data from several indexes into a single one. As @richgalloway said - frozen buckets (actually any buckets) don't have anything _inside_ them that would indicate which index they are from. It's where they are placed that decides which index they belong to.
I'd also assume that since you wanted a hostname _pattern_, a simple equality check won't do. In such a case you should use match() or searchmatch() as your where condition. It's also worth pointing out that this search will most likely be more performance-intensive than it needs to be and might be better done differently.
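A hedged sketch of what that might look like with match(), converting the wildcard patterns from the earlier answer into regular expressions (the patterns here are only illustrative):
index="index_1" OR index="index_2"
| eval hostname_pattern=case(index=="index_1", "-hostname_1$", index=="index_2", "-hostname_2$")
| where match(hostname, hostname_pattern)
match() treats its second argument as a regex, which is why the leading * from the wildcard form is dropped and an end-of-string anchor is added.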
Hi, Can someone please help me to extract multiple fields from a single backslash separated field using the rex command?
FIELD1 = ABCD/EFGH/IJ/KL/MN/OP/QRST
How can I create the multiple fields from FIELD1 as below:
Field_1 = ABCD
Field_2 = EFGH
Field_3 = IJ
Field_4 = KL
Field_5 = MN
Field_6 = OP
Field_7 = QRST
Hi, Can someone please let me know how I can use the below expression (generated via Field Extraction) directly in a rex command?
Regular expression generated via Field Extraction:
^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)
I am using the rex command as below, but I am getting an error:
| rex field=Message mode=sed "(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH1>[^"]+)"
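One likely fix, sketched without sample events to test against: drop mode=sed (sed mode is for substitution, not extraction) and escape the double quotes inside the pattern so they do not terminate the rex string, along the lines of:
| rex field=Message "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)"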
The search command doesn't accept a field name on both sides of an expression.  Use where, instead.
index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2"
)
| where hostname = hostname_pattern
Frozen data does not know from which index it came, so it doesn't matter where you restore it.  Consider creating a "thaweddata" index for it.  Of course, users will have to search that index as well as any live indexes.
Greetings, Does anyone know if it's possible to create a script that writes a Splunk search query based on an alert's results/table? For example: "Multiple Failure Attempts" uses the "Authentication" data model to display results and only shows specific fields such as username, total failure attempts, source ip, destination, etc. But I want to conduct more investigation and check the raw logs to see more fields, so I have to write a new search query specifying the fields and their values to get all the information (index=* sourcetype=xxx user=xxx dest=xxx srcip=xxx) and then look for more fields under the displayed results. I would like to automate this process. Any suggestions for apps, scripts, or recommended programming languages?
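One hedged way to build the drill-down filter in SPL itself, assuming the alert is based on the CIM Authentication data model and that the displayed field names map directly onto the raw events, is to let the format command turn the result rows into a search string:
| tstats count from datamodel=Authentication where Authentication.action="failure" by Authentication.user, Authentication.src, Authentication.dest
| rename Authentication.* as *
| fields user src dest
| format
The generated "search" field, e.g. ( ( user="x" AND src="y" AND dest="z" ) OR ... ), can then be appended to an index=* sourcetype=xxx search by a script or a scheduled search.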
We have an issue where we created a single default frozen folder instead of a frozen folder per index. Now we have some data in our frozen folder and we want to restore it back to searchable data. How can I identify the index name of that data, or, if I can't identify the index name, how can I restore it to a random index?