All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @Siddharthnegi,
you can do this only by displaying the records from a KV-Store lookup in your dashboard, because Splunk isn't a database where you can delete records. But a KV-Store lookup is a database table in which you can store e.g. your open cases. You then have to add an HTML button to your dashboard (Classic dashboard) and create a JavaScript file that executes a search which deletes a record from the lookup and then displays the updated lookup again.

In other words, in the dashboard you have to add:

<row>
  <panel>
    <html>
      <button class="btn btn-primary button1">Save</button>
      <a class="btn btn-primary" href="all_cases">Cancel</a>
    </html>
  </panel>
</row>

Then you have to create a JS file like the following (it obviously must be adapted to your lookup) and place it in $SPLUNK_HOME/etc/apps/<my_app>/appserver/static:

require([
    "splunkjs/mvc",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/savedsearchmanager",
    "splunkjs/mvc/utils",
    "splunkjs/mvc/simplexml/ready!"
], function(mvc, SearchManager, SavedSearchManager, utils) {
    var query;
    var tokens = mvc.Components.get("default");
    tokens.unset("status_to_update", null);
    tokens.unset("notes_to_update", null);
    tokens.unset("username_to_update", null);

    $(".button1").on("click", function () {
        var username = Splunk.util.getConfigValue("USERNAME");
        var mod_status = tokens.get("status_to_update");
        var mod_notes = tokens.get("notes_to_update");
        var mod_username = tokens.get("username_to_update");
        var key = tokens.get("key");

        if (mod_username != null) {
            username = mod_username;
        }

        if (mod_status == null) {
            query = "| inputlookup open_cases | eval Notes=if(_key=\"" + key + "\",\"" + mod_notes + "\",Notes), Status=\"Work-in-progress\", User_Name=if(_key=\"" + key + "\",\"" + username + "\",User_Name) | search _key=\"" + key + "\" | outputlookup open_cases append=true | eval key=_key | collect addtime=true index=summary_alerts | eval Time=strftime(TimeStamp,\"%d/%m/%Y %H:%M:%S\"), key=_key | table key Time TimeStamp Alert_Name Description Status Notes User_Name";
        } else {
            query = "| inputlookup open_cases | eval Status=if(_key=\"" + key + "\",\"" + mod_status + "\",Status), Notes=if(_key=\"" + key + "\",\"" + mod_notes + "\",Notes), User_Name=if(_key=\"" + key + "\",\"" + username + "\",User_Name) | search _key=\"" + key + "\" | outputlookup open_cases append=true | eval key=_key | collect addtime=true index=summary_alerts | eval Time=strftime(TimeStamp,\"%d/%m/%Y %H:%M:%S\"), key=_key | table key Time TimeStamp Alert_Name Description Status Notes User_Name";
        }

        var ok = confirm("Are you sure?");
        if (ok) {
            launchquery(query);
        }
    });

    function launchquery(query) {
        var mysearch = new SearchManager({
            id: "mysearch",
            autostart: false,
            search: query
        });
        mysearch.on('search:failed', function(properties) {
            // Print the entire properties object
            console.log("FAILED:", properties);
        });
        mysearch.on('search:progress', function(properties) {
            // Print just the event count from the search job
            console.log("IN PROGRESS.\nEvents so far:", properties.content.eventCount);
        });
        mysearch.on('search:done', function(properties) {
            // Print the search job properties
            console.log("DONE!\nSearch job properties:", properties.content);
        });
        window.location.reload();
    }
});

I cannot help you more because I'm not an expert in JS development.
Ciao.
Giuseppe
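As a side note, the same deletion can also be done outside the dashboard through the KV Store REST API, which removes a record by its _key. Below is a minimal, untested Python sketch: the host, app name, collection name, key and credentials are all placeholder assumptions, and actually sending the request (urllib.request.urlopen) is left out.

```python
# Sketch: delete a single KV Store record by its _key through Splunk's
# REST API, which is what the dashboard JS above achieves via a search.
# Host, app, collection, key and credentials are placeholder assumptions.
import base64
import urllib.request

def kvstore_record_url(host, app, collection, key, port=8089):
    """REST URL for one KV Store record (a DELETE removes it)."""
    return (f"https://{host}:{port}/servicesNS/nobody/{app}"
            f"/storage/collections/data/{collection}/{key}")

def build_delete_request(url, username, password):
    """Prepare an authenticated DELETE request (send it with urlopen)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url, method="DELETE")
    req.add_header("Authorization", f"Basic {token}")
    return req

# Build (but don't send) a request for a hypothetical record:
req = build_delete_request(
    kvstore_record_url("localhost", "search", "open_cases", "abc123"),
    "admin", "changeme")
print(req.get_method(), req.full_url)
```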
Hi, I have a dashboard and I want to add a button, so when somebody solves a particular issue he/she can click on that button; it will change the status to solved and the issue will be removed from the dashboard. For example: I have an issue on a device and I solved that issue, so I can click on that button and it will mark the issue as solved, or the issue will be removed from the dashboard.
I have finally succeeded in connecting Splunk with Power BI. But while adding a new source and getting data in Power BI, the data models I see are different from those I see in the Splunk Datasets tab, and I also cannot find the table view I created in Splunk.
Hi @emmanuelkatto23,
when you have a large amount of data, my hint is to use an accelerated data model (https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Acceleratedatamodels), a summary index (https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Knowledge/Aboutsummaryindexing) or report acceleration (https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Knowledge/Manageacceleratedsearchsummaries#Manage_report_acceleration).
Obviously, the first hint is to define exactly the time range to use in your searches, avoiding large time ranges and limiting them to what you need for your use case.
Then (always obvious), you need very performant storage: Splunk requires at least 800 IOPS, but if you can use SSD disks (with more than 10,000 IOPS), at least for the hot and warm buckets, you'll have more performant searches.
Ciao.
Giuseppe
@PickleRick is correct that you cannot "automate" this from outside of Splunk because the nature of an alert is not to carry all context. (Otherwise you wouldn't be asking this question.) Meanwhile, I will take a totally different approach from your stated one.

Start from this dummy "alert" as an example:

index=_internal log_level=ERROR earliest=-5m
| stats count
| where count > 50
| sendalert dummyaction

Say that every time this alert is triggered, you want a search to send all the raw events used in this alert to your email. Schedule the following the same way the alert is scheduled:

index=_internal log_level=error earliest=-5m
| stats count
| where count > 100
| map search="search index=_internal log_level=error earliest=-5m"
| sendemail to="elvis@splunk.com" sendresults=true

There are many ways to refine and develop this idea, many different commands to choose from, and many ways to customize it according to what additional information you need for your investigation. The bottom line is: you don't run a "script" (to respond to a triggered alert). Just use the same filter to trigger an action that gives you the appropriate level of detail.
Hi @Real_captain,
this straightforward method may not work if your data format changes; using the "split" function would be a simpler and more robust method. With rex it looks like this:

|makeresults
| eval FIELD1 = "ABCD/EFGH/IJ/KL/MN/OP/QRST"
| rex field=FIELD1 "(?P<Field_1>\w+)\/(?P<Field_2>\w+)\/(?P<Field_3>\w+)\/(?P<Field_4>\w+)\/(?P<Field_5>\w+)\/(?P<Field_6>\w+)\/(?P<Field_7>\w+)"
| table FIELD1 Field_1 Field_2 Field_3 Field_4 Field_5 Field_6 Field_7
Hi everyone, my name is Emmanuel Katto. I'm currently working on a project where I need to analyze large datasets in Splunk, and I've noticed that search performance tends to degrade as the dataset size increases. I'm looking for best practices or tips on how to optimize search performance in Splunk.

What are the recommended indexing strategies for managing large volumes of data efficiently? Are there particular search query optimizations I should consider to speed up execution time, especially with complex queries? How can I effectively utilize data models to improve performance in my searches?

I appreciate any insights or experiences you can share. Thank you in advance for your help!
Best,
Emmanuel Katto
Hi @Real_captain,
the mode=sed was not from the field extraction wizard. May I know why you thought to use mode=sed? Please suggest. As you can see:

Syntax: mode=sed
Description: Specify to indicate that you are using a sed (UNIX stream editor) expression.
sed-expression
Syntax: "<string>"
Description: When mode=sed, specify whether to replace strings (s) or substitute characters (y) in the matching regular expression.

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex#Syntax

Not sure, but let's try (note the inner quotes need escaping inside the rex string):

| rex field=Message "(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH1>[^\"]+)\""

Sample log lines would be helpful to troubleshoot this, thanks.
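For what it's worth, the shape of that pattern can be checked quickly outside Splunk with Python's re module. The sample line below is invented (no real log line was shared in the thread), so treat this only as a sketch of the structure the rex above expects:

```python
import re

# Invented sample line: seven comma-separated fields, then a quoted
# five-part underscore key followed by a quoted value.
message = 'a,b,c,d,e,f,g, "POH_STATE_READ_OUT_X": "OK"'

# Same structure as the SPL rex: skip 7 comma-delimited fields, then
# capture the quoted value that follows the quoted key.
pattern = r'(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH1>[^"]+)"'
m = re.search(pattern, message)
print(m.group("POH1") if m else "no match")  # -> OK
```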
Thanks @richgalloway for your response. I tried with

| where hostname like hostname_pattern

but it's not returning any search results.
I had the same understanding, thanks for confirming. I am asked to modify the raw event that we receive in JSON format, to include a new key-value pair and to replace the value of one of the fields for a specific key.
This was my last fallback option, as I have multiple fields and the query would become lengthy. It does give me the flexibility to add extra fields to the _raw event. I was just assuming that Splunk has some built-in solution that I might be missing.
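Outside of a search, the transformation being asked about is trivial in any scripting language; whether Splunk itself should do it at ingest time is a separate question. Here is a Python sketch with invented field names and values, just to illustrate the "add a key, replace a value" operation on a JSON event:

```python
import json

# Invented sample JSON event -- the real structure wasn't shared.
raw = '{"device": "fw01", "status": "open", "severity": "high"}'

event = json.loads(raw)
event["status"] = "solved"        # replace the value for a specific key
event["resolved_by"] = "someone"  # add a new key-value pair
new_raw = json.dumps(event)
print(new_raw)
```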
Ok. Unless you do something very, very strange, a Splunk component should be reading and indexing or forwarding its own internal logs. That's why I asked about the internal logs. Your output from list monitor shows just that, and it's a normal thing.

I asked how you checked whether you're getting the data or not, because it's a fairly typical case, when your source has misconfigured time settings (either the clock is not in sync or the timezone is wrongly set up), that the data is actually indexed but at the wrong point in time. So when you search for "last 15 minutes" or the last few hours, it doesn't show in search, but the data is there. Just badly onboarded. Try searching for those "not working" hosts over a bigger time range (you could risk all-time, especially if you do it with tstats):

| tstats min(_time) max(_time) count where index=_internal host=<your_forwarder>

I'm assuming your data flow is UF->HF->idx, right? Windows UFs go through the same HFs as the Linux ones? Look for information about connections established to the downstream HF (or errors) in the UF's splunkd.log. If there are errors, look for corresponding errors/warnings on the HF's side.
From UF: ./splunk list monitor
tcpdump from UF, checking traffic on HF's IP
tcpdump from HF, checking traffic on UF's IP
(screenshots not included)
1. Infra: UF (Windows and RedHat 8.10) and HF (RedHat 9.4) are in Azure. Logs are forwarded to indexers (remote, on-prem).
2. Windows logs (UF) are received by the indexers.
3. Linux (UF) logs are not received by the indexers.
4. From the Linux UF, ./splunk list monitor lists all log names to be forwarded. Established connections on the port for both UF and HF IP addresses when checking with netstat -an.
5. Continuous traffic observed going out from UF to HF (SYN and ACK in tcpdump).
6. Yes. What exactly should I check in splunkd.log?

What commands can I use to confirm that logs are forwarded from UF to HF, and then from HF to the indexers?
The fields can be extracted using the rex command or by using the split function (and perhaps others).

| eval FIELDS = split(FIELD1, "/")

| rex field=FIELD1 max_match=0 "(?<FIELDS>[^\/]+)"

Both commands will extract the fields into a multi-value field, so you'll need to assign them to separate fields.

| foreach 1 2 3 4 5 6 7 [eval FIELD_<<FIELD>>=mvindex(FIELDS,<<FIELD>>-1)]
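The split-then-assign logic above (a multi-value field plus mvindex, padded out to a fixed seven fields) can be sketched outside SPL as well. Here is the equivalent in Python; the function name is my own invention:

```python
def split_to_fields(value, n=7, sep="/"):
    """Split a delimited string into Field_1..Field_n, padding with ""
    when there are fewer than n parts (mirrors the foreach/mvindex trick)."""
    parts = value.split(sep)
    return {f"Field_{i + 1}": parts[i] if i < len(parts) else ""
            for i in range(n)}

print(split_to_fields("ABCD/EFG6/CR/IN/OU/XY/BMW"))
print(split_to_fields("ABCD"))  # Field_2..Field_7 come back empty
```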
ENOTENOUGHINFO

But seriously:
1. What does your infrastructure look like?
2. Do you get _any_ logs from any of your new hosts (including internal indexes)?
3. How did you verify that the data is not ingested?
4. Did you do any more troubleshooting, or just the tcpdump?
5. What do you see in your tcpdump output?
6. Did you check splunkd.log on the involved hosts?
1. Those are slashes, not backslashes 2. Is the number of fields constant? If not, you can't use regex alone to split it into fields. 3. Isn't splitting the string with the eval split() function en... See more...
1. Those are slashes, not backslashes.
2. Is the number of fields constant? If not, you can't use a regex alone to split it into fields.
3. Isn't splitting the string with the eval split() function enough?
As a general task it's simply impossible. How are you supposed to know whether your results come from a search like

index=windows | stats count

or

| makeresults | eval count=10 | table count

Ok, this is an extreme example, but it should show my point fairly well: without a lot of assumptions, you can't know what data the results came from. The main issue with your problem is not the tool (although you probably want something that has ready-made libraries to interface with Splunk, so you don't have to reinvent the wheel). The main issue is the method you'd want to use to build such a search. This is something you'd have to give the most consideration to.
Hi Team, can someone please help me extract a slash-separated field into multiple fields?

Example: the field is present in Splunk as below:
Field = ABCD/EFG6/CR/IN/OU/XY/BMW

I need to use the rex command to extract the above field into 7 fields as below:
Field_1 = ABCD
Field_2 = EFG6
Field_3 = CR
Field_4 = IN
Field_5 = OU
Field_6 = XY
Field_7 = BMW

In case the value of the field is as below:
Field = ABCD

then the rex command should generate the 7 fields as below:
Field_1 = ABCD
Field_2 =
Field_3 =
Field_4 =
Field_5 =
Field_6 =
Field_7 =
On the other hand - if you don't know which buckets come from which index... well, that's too bad. You're most probably gonna restore data from several indexes into a single one. As @richgalloway said - frozen buckets (actually any buckets) don't have anything _inside_ them that would indicate which index they are from. It's where they are placed that decides which index they belong to.