All Posts


1. Those are slashes, not backslashes.
2. Is the number of fields constant? If not, you can't use regex alone to split it into fields.
3. Isn't splitting the string with the eval split() function enough? (See the sketch below.)
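For point 3, a minimal sketch of the split() route, assuming the field is named Field as in the question. mvindex() returns null past the end of the multivalue field, so fillnull pads the missing segments to empty strings:

| eval parts=split(Field, "/")
| eval Field_1=mvindex(parts, 0), Field_2=mvindex(parts, 1), Field_3=mvindex(parts, 2), Field_4=mvindex(parts, 3), Field_5=mvindex(parts, 4), Field_6=mvindex(parts, 5), Field_7=mvindex(parts, 6)
| fillnull value="" Field_1 Field_2 Field_3 Field_4 Field_5 Field_6 Field_7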
As a general task it's simply impossible. How are you supposed to know whether your results come from a search like

index=windows | stats count

or

| makeresults | eval count=10 | table count

OK, this is an extreme example, but it should make my point fairly well: without a lot of assumptions you can't know what data the results came from. The main issue with your problem is not the tool (although you probably want something that has ready-made libraries to interface with Splunk so you don't have to reinvent the wheel). The main issue is the method you'd use to build such a search. That is what you'd have to give the most consideration to.
Hi Team, can someone please help me extract a backslash-separated field into multiple fields?

Example: the field is present in Splunk as below:

Field = ABCD/EFG6/CR/IN/OU/XY/BMW

I need to use the rex command to extract the above field into 7 fields as below:

Field_1 = ABCD
Field_2 = EFG6
Field_3 = CR
Field_4 = IN
Field_5 = OU
Field_6 = XY
Field_7 = BMW

In case the value of the field is as below:

Field = ABCD

then the rex command should still generate the 7 fields as below:

Field_1 = ABCD
Field_2 =
Field_3 =
Field_4 =
Field_5 =
Field_6 =
Field_7 =
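One possible rex, sketched on the assumption that there are at most seven slash-separated segments. The trailing groups are optional, so a value like ABCD still matches; unmatched groups produce no field, and fillnull then pads them to empty strings:

| rex field=Field "^(?<Field_1>[^/]*)(?:/(?<Field_2>[^/]*))?(?:/(?<Field_3>[^/]*))?(?:/(?<Field_4>[^/]*))?(?:/(?<Field_5>[^/]*))?(?:/(?<Field_6>[^/]*))?(?:/(?<Field_7>[^/]*))?"
| fillnull value="" Field_1 Field_2 Field_3 Field_4 Field_5 Field_6 Field_7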
On the other hand - if you don't know which buckets come from which index... well, that's too bad. You're most probably gonna restore data from several indexes into a single one. As @richgalloway said - frozen buckets (actually any buckets) don't have anything _inside_ them that would indicate which index they are from. It's where they are placed that decides which index they belong to.
I'd also assume that since you wanted a hostname _pattern_, a simple equality check won't do. In that case you should use match() or searchmatch() as your where condition. It's also worth pointing out that this search will most likely be more performance-intensive than it needs to be and might be better done differently.
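A sketch of what that could look like, with the wildcard patterns rewritten as regular expressions because match() expects a regex, not search-style wildcards:

index="index_1" OR index="index_2"
| eval hostname_pattern=case(
    index == "index_1", "-hostname_1$",
    index == "index_2", "-hostname_2$")
| where match(hostname, hostname_pattern)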
Hi, can someone please help me extract multiple fields from a single backslash-separated field using the rex command?

FIELD1 = ABCD/EFGH/IJ/KL/MN/OP/QRST

How do I create multiple fields from FIELD1 as below?

Field_1 = ABCD
Field_2 = EFGH
Field_3 = IJ
Field_4 = KL
Field_5 = MN
Field_6 = OP
Field_7 = QRST
Hi, can someone please let me know how I can use the below expression (generated via Field Extraction) directly in the rex command?

Regular expression generated via Field Extraction:

^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)

I am using the rex command as below, but I am getting an error:

| rex field=Message mode=sed "(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH1>[^"]+)"
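Two likely problems there: mode=sed is only for sed-style substitutions (s/.../.../), not for field extraction, and the unescaped inner quotes end the quoted regex early. A sketch of the corrected command, assuming the field really is Message:

| rex field=Message "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?<POH>[^\"]+)"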
The search command doesn't accept a field name on both sides of an expression.  Use where, instead.

index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2")
| where hostname=hostname_pattern
Frozen data does not know from which index it came, so it doesn't matter where you restore it.  Consider creating a "thaweddata" index for it.  Of course, users will have to search that index as well as any live indexes.
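A minimal sketch of the thaw procedure on the command line, with a hypothetical bucket name and the default $SPLUNK_HOME paths. After creating the "thaweddata" index, copy each frozen bucket into its thaweddb directory, rebuild it, and restart:

# hypothetical frozen bucket restored into the thaweddb directory of the new index
cp -r /backup/frozen/db_1389230491_1389230488_5 $SPLUNK_HOME/var/lib/splunk/thaweddata/thaweddb/
# rebuild the bucket's indexes and metadata, then restart so it becomes searchable
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/thaweddata/thaweddb/db_1389230491_1389230488_5
$SPLUNK_HOME/bin/splunk restart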
Greetings, does anyone know if it's possible to create a script that writes a Splunk search query based on an alert's results/table?

For example, "Multiple Failure Attempts" uses the "Authentication" data model to display results and only shows specific fields such as username, total failure attempts, source IP, destination, etc. But I want to conduct more investigation and check the raw logs to see more fields, so I have to write a new search query specifying fields and their values to get all the information (index=* sourcetype=xxx user=xxx dest=xxx srcip=xxx) and then look for more fields under the displayed results. I would like to automate this process.

Any suggestions for apps, scripts, or a recommended programming language?
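One way to stay inside Splunk instead of writing an external script is the map command, which runs a follow-up raw search once per result row. A sketch, assuming the alert is a saved search called "Multiple Failure Attempts" and yields fields named user, dest and src_ip (adjust to your actual field names):

| savedsearch "Multiple Failure Attempts"
| map maxsearches=20 search="search index=* sourcetype=xxx user=\"$user$\" dest=\"$dest$\" src_ip=\"$src_ip$\""

If you do want an external script, the Splunk SDK for Python gives you ready-made bindings for running searches and reading results, so you only have to generate the SPL string itself.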
We have an issue where we created a single default frozen folder instead of a frozen folder for each index. Now we have some data in our frozen folder and we want to restore it back to searchable data. How can I identify the index name of that data, or, if I can't identify the index name, how do I restore it to an arbitrary index?
Hello everyone, I have the following Splunk query, which I am trying to build for a dropdown in a dashboard. Basically there are 2 dropdowns; the 1st dropdown has static values, which are index names: index_1, index_2, index_3. Based on the selected index, I am trying to run this Splunk query:

index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2")
| search hostname=hostname_pattern

The search always returns empty. However, if I run the direct query for index_1 or index_2 with its relevant hostname, it works and returns results:

index="index_1" | search hostname="*-hostname_1"

For the sake of checking whether my condition is working or not, I fed the output of the eval case into a table and checked by passing the relevant indexes (index_1 or index_2):

index="index_1"
| eval hostname_pattern=case(
    index == "index_1", "*-hostname_1",
    index == "index_2", "*-hostname_2")
| stats count by hostname_pattern
| table hostname_pattern
| sort hostname_pattern

This returns *-hostname_1. I am not sure how we can pass the hostname value to the search based on the selected index. I highly appreciate your help.
Hi guys, I have an issue with a newly set-up HF and UF. The Windows UFs' logs are reaching the indexers while the Linux UFs' are not. Communication is OK between the Linux UF and the HF, as observed using tcpdump: the Linux UF is sending traffic, and the HF receives and processes it. Can you help with what needs to be checked on the UF or HF?
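As a first sanity check, a sketch of an internal-index search to run on the indexers (or the HF) to see which forwarders are actually completing the Splunk-to-Splunk handshake, as opposed to merely reaching the port at the TCP level:

index=_internal source=*metrics.log* group=tcpin_connections
| stats count latest(_time) as last_seen by hostname fwdType

If the Linux UF does not show up here at all, check its outputs.conf and $SPLUNK_HOME/var/log/splunk/splunkd.log on the UF for connection or SSL errors.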
hi @Xander13, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Thank you guys. The issue was resolved. There was a NOEXEC restriction configured on the account in the SUDOERS file.
Hi @Xander13, you're using the splunk user to run the upgrade and probably there are some files owned by root. You have two choices:
- run the upgrade as root, or
- run the command "chown -R splunk:splunk /opt/splunk" as root and then run the upgrade as the splunk user.
Ciao. Giuseppe
Hi @Xander13, the error "Error calling execve(): Permission denied" was discussed in this post. Could you please check it once? Thanks. https://community.splunk.com/t5/Getting-Data-In/When-trying-to-start-Splunk-I-m-getting-an-quot-execve/m-p/119749
Yes, that is exactly what I want. I have one pipeline that starts several pipelines, and they all start from the same traceid. That is the reason I want to point to a specific time frame, so I will see the exact trace that is related directly to this pipeline.
Thanks, but it didn't help; I still see all the traces even if I change the startTime to -5m or -1m.
1. index=abc source="/opt/src/datasource.tmp" | dedup _raw | table Servers | stats count(Servers) as Total
2. index=abc source="/opt/src/datasource.tmp" | dedup _raw | table CompletedServers | stats count(CompletedServers) as Completed

As @PickleRick points out, these searches you posted reveal potentially deeper problems in your data.  If there is a need to dedup _raw, you should try to clean up the data first.  Also, there should never be two separate index searches using the same source; PickleRick already illustrated a single search to get the same counts.  Let me further point out that, most likely, the two searches produce the exact same count if Servers and CompletedServers appear in the same events.

But back to your original table:

ServerName    UpgradeStatus
==========    =============
Server1       Completed
Server2       Completed
Server3       Completed
Server4       Completed
Server5       Completed
Server6       Completed
Server7       Pending
Server8       Pending
Server9       Pending
Server10      Pending

Obviously, neither of your searches will produce those "Pending" values.  When asking a question in a public forum, it is really important to explain your input and output.  It is obvious that you did not think @sainag_splunk's and my previous answers gave you the solution because you didn't even have this table yet; if you did, either of our searches would have given you the table you needed.  So, I venture to guess that the real question is how to derive the first table from the index data you have.  Once this table is formed, either of our suggestions would have given you the display you wanted.  Is this correct?

Back to the problem of UpgradeStatus.  Given that your searches do not produce Pending values, the big question is: what is in CompletedStatus?  Does it give "Completed" for some ServerName values and "Pending" for others?  And what is the field name that gives you ServerName?  Is it Servers, as used in your first search?  If both are true, and ServerName and CompletedStatus appear in the same events, the solution is as simple as:

index=abc source="/opt/src/datasource.tmp"
| stats dc(Servers) as count by CompletedStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=CompletedStatus
| fields - column

In other words, all that changed from my previous answer are the field names that I guessed from the two searches.  Here are my four commandments of asking answerable data analytics questions:

1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.

Volunteers are not your mind readers.  It is unfair to ask unanswerable questions here.