All Posts
I am trying to get data from Splunk into Power BI. To do that, I made a connection between Splunk and Power BI through the Splunk ODBC driver.
@inventsekar This one is actually a bit different from the two threads from yesterday that I merged into one. @Real_captain Inline extractions must use named capture groups, which translate directly to extracted fields (with transform-based extractions you can use numbered capture groups to define fields). So you can simply do

| rex "your_regex_here"

with just one caveat: since the argument to the rex command is a string, you have to properly escape all necessary characters (mostly quotes and backslashes).
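For instance, here is a minimal sketch of that escaping applied to the extraction posted in this thread (the regex is taken verbatim from the original post and not validated against your data; only the inner quotes have been escaped):

| rex "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)"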
Hi @Real_captain, could you pls avoid creating duplicate posts and keep the discussion on your post from yesterday? Also, could you pls give us some more details about what you tried? That will make troubleshooting your issue easier. Thanks.
Hi @emmanuelkatto23  >>> Are there particular search query optimizations I should consider to speed up the execution time, especially with complex queries? On the DMC (Distributed Management Console) you can find many dashboards/panels; using these you can find which searches took a long time to run, which searches consumed the most resources, etc. There are many other considerations, like avoiding joins. 1) Pls tell us which Splunk apps you use, and 2) which Splunk objects (user searches, reports, alerts or dashboards) are consuming the most resources. Then you can start fine-tuning them one by one, thanks.
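If it helps, here is a minimal sketch for spotting long-running searches straight from the _audit index (field names are as they appear in standard _audit events; the grouping is illustrative):

index=_audit action=search info=completed
| stats count avg(total_run_time) as avg_runtime max(total_run_time) as max_runtime by user
| sort - max_runtime

This complements the DMC panels when you want to slice the data yourself.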
OK. First things first. What did you do to "connect Splunk with Power BI"? Are you ingesting data from Power BI into Splunk (how?), or are you getting data from Splunk into Power BI (again - how?)?
Oh, mate... You're trying to tackle several years of experience with a quick forum post. Optimizing searches (just like any programming optimization) is partly science, partly art. You need to understand how Splunk works: how it breaks an event into single terms, how it stores those terms in indexes, how it searches for data (especially in a distributed architecture), what the different command types are and how they impact your search processing, what datamodels are and what they aren't (and what an accelerated datamodel means; datamodel acceleration is not the same as the datamodel itself), and how accelerations work. It's not straightforward, but it's not impossible, of course.

One thing: a datamodel on its own doesn't accelerate searching. It can accelerate writing searches, because the datamodel definition separates your high-level search from the actual low-level details of your data. You don't have to, for example, care whether your firewall produces logs with a source IP field called src_ip, src, source_ip, or whatever the developers wanted. If your logs were made CIM-compliant by the relevant add-on, you can just search the Network_Traffic datamodel using the src_ip field. And that's it. A datamodel on its own doesn't give you more than that. But if you enable datamodel acceleration, Splunk periodically searches through the data covered by that datamodel and builds a pre-indexed summary, which you can search faster than the raw data underlying the datamodel.
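To make that concrete, a minimal sketch of searching an accelerated datamodel with tstats (assuming the CIM Network_Traffic datamodel is accelerated; All_Traffic is its root dataset, and the filter value is illustrative):

| tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.src_ip="10.0.0.1" by All_Traffic.dest_ip
| sort - count

summariesonly=true restricts the search to the pre-built summaries, so it only returns results for time ranges that have already been accelerated.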
Hi Team, can you please let me know how I can use the field extraction regex below directly with the rex command? Field extraction regex:

^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)
Wait a second. First you were asking about "sorting" JSON fields in the output of a search (at least that's how I understood your question). Now you're saying you want to modify the _raw event. By "modifying" I understand that you want to do it before the event is written to an index. Manipulating structured data with just regexes is not a very good idea (maybe except for very simple cases, but even then I'd be very careful).
Hi @Siddharthnegi, you can do this only by displaying in your dashboard the records from a KV Store lookup, because Splunk isn't a database where you can delete records. But a KV Store lookup is a database table in which you can store e.g. your open cases. Then you have to add an HTML button to your dashboard (Classic dashboard) and create a JavaScript file that executes a search that updates the record in the lookup and then displays the updated lookup again. In other words, in the dashboard you have to add:

<row>
  <panel>
    <html>
      <button class="btn btn-primary button1">Save</button>
      <a class="btn btn-primary" href="all_cases">Cancel</a>
    </html>
  </panel>
</row>

Then you have to create a JS file like the following (obviously it must be adapted to your lookup) and place it in $SPLUNK_HOME/etc/apps/<my_app>/appserver/static:

require([
    "splunkjs/mvc",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/savedsearchmanager",
    "splunkjs/mvc/utils",
    "splunkjs/mvc/simplexml/ready!"
], function(mvc, SearchManager, SavedSearchManager) {
    var query;
    var tokens = mvc.Components.get("default");
    tokens.unset("status_to_update", null);
    tokens.unset("notes_to_update", null);
    tokens.unset("username_to_update", null);

    $(".button1").on("click", function() {
        var username = Splunk.util.getConfigValue("USERNAME");
        var mod_status = tokens.get("status_to_update");
        var mod_notes = tokens.get("notes_to_update");
        var mod_username = tokens.get("username_to_update");
        var key = tokens.get("key");

        // If the user entered a new username, use it instead of the logged-in one
        if (mod_username != null) {
            username = mod_username;
        }

        if (mod_status == null) {
            // No new status entered: force "Work-in-progress" and update only the notes
            query = "| inputlookup open_cases | eval Notes=if(_key=\"" + key + "\",\"" + mod_notes + "\",Notes), Status=\"Work-in-progress\", User_Name=if(_key=\"" + key + "\",\"" + username + "\",User_Name) | search _key=\"" + key + "\" | outputlookup open_cases append=true | eval key=_key | collect addtime=true index=summary_alerts | eval Time=strftime(TimeStamp,\"%d/%m/%Y %H:%M:%S\"), key=_key | table key Time TimeStamp Alert_Name Description Status Notes User_Name";
        } else {
            // New status entered: update status, notes and user name
            query = "| inputlookup open_cases | eval Status=if(_key=\"" + key + "\",\"" + mod_status + "\",Status), Notes=if(_key=\"" + key + "\",\"" + mod_notes + "\",Notes), User_Name=if(_key=\"" + key + "\",\"" + username + "\",User_Name) | search _key=\"" + key + "\" | outputlookup open_cases append=true | eval key=_key | collect addtime=true index=summary_alerts | eval Time=strftime(TimeStamp,\"%d/%m/%Y %H:%M:%S\"), key=_key | table key Time TimeStamp Alert_Name Description Status Notes User_Name";
        }

        if (confirm("Are you sure?")) {
            launchquery(query);
        }
    });

    function launchquery(query) {
        var mysearch = new SearchManager({
            id: "mysearch",
            autostart: false,
            search: query
        });
        mysearch.on('search:failed', function(properties) {
            console.log("FAILED:", properties);
        });
        mysearch.on('search:progress', function(properties) {
            console.log("IN PROGRESS.\nEvents so far:", properties.content.eventCount);
        });
        mysearch.on('search:done', function(properties) {
            console.log("DONE!\nSearch job properties:", properties.content);
            // Reload only after the update search has finished,
            // otherwise the reload would cancel the running search
            window.location.reload();
        });
        mysearch.startSearch();
    }
});

I cannot help you more because I'm not an expert in JS development. Ciao. Giuseppe
Hi, I have a dashboard and I want to add a button, so that when somebody solves a particular issue they can click the button, the status changes to solved, and the issue is removed from the dashboard. For example: I have an issue on a device and I solve it; I can then click the button and it will either mark the issue as solved or remove it from the dashboard.
I am finally successful in connecting Splunk with Power BI. But while adding a new source and getting data in Power BI, the data models I see are different from those in the Splunk Datasets tab, and I also cannot find the table view I created in Splunk.
Hi @emmanuelkatto23, when you have a large amount of data, my hint is to use an accelerated data model (https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Acceleratedatamodels ), a summary index (https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Knowledge/Aboutsummaryindexing ), or report acceleration (https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Knowledge/Manageacceleratedsearchsummaries#Manage_report_acceleration ). Obviously, the first hint is to define exactly the time range to use in your searches, avoiding large time ranges and limiting them to what you need for your use case. Then (always obvious), you need very performant storage: Splunk requires at least 800 IOPS, but if you can use SSD disks (with more than 10,000 IOPS), at least for the hot and warm buckets, you'll have more performant searches. Ciao. Giuseppe
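For illustration, a minimal summary-indexing sketch, assuming a hypothetical source index "web" and a pre-created summary index named "summary" (all names are placeholders). A scheduled search writes hourly rollups:

index=web earliest=-1h@h latest=@h
| stats count by status
| collect index=summary

and reports then read the much smaller summary instead of the raw data:

index=summary
| stats sum(count) as count by status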
@PickleRick is correct that you cannot "automate" this from outside of Splunk, because the nature of an alert is not to carry all context. (Otherwise you wouldn't be asking this question.) Meanwhile, I will take a totally different approach from your stated approach.

Start from this dummy "alert" as an example:

index=_internal log_level=ERROR earliest=-5m
| stats count
| where count > 50
| sendalert dummyaction

If, say, every time this alert is triggered, you want a search to send all the raw events used in this alert to your email, schedule the following the same way the alert is scheduled:

index=_internal log_level=ERROR earliest=-5m
| stats count
| where count > 50
| map search="search index=_internal log_level=ERROR earliest=-5m"
| sendemail to="elvis@splunk.com" sendresults=true

There are many ways to refine and develop this idea, many different commands to choose from, and many ways to customize according to what additional information you need for your investigation. The bottom line is: you don't run a "script" (to respond to a triggered alert). Just use the same filter to trigger an action that gives you the appropriate level of detail.
Hi @Real_captain  This straightforward method may not work if your data format changes; using the split command would be a simpler and more robust alternative (see the sketch after the rex version below).

| makeresults
| eval FIELD1 = "ABCD/EFGH/IJ/KL/MN/OP/QRST"
| rex field=FIELD1 "(?P<Field_1>\w+)\/(?P<Field_2>\w+)\/(?P<Field_3>\w+)\/(?P<Field_4>\w+)\/(?P<Field_5>\w+)\/(?P<Field_6>\w+)\/(?P<Field_7>\w+)"
| table FIELD1 Field_1 Field_2 Field_3 Field_4 Field_5 Field_6 Field_7
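For reference, here is a split-based version, a minimal sketch assuming the same sample value and target field names as above:

| makeresults
| eval FIELD1 = "ABCD/EFGH/IJ/KL/MN/OP/QRST"
| eval parts = split(FIELD1, "/")
| eval Field_1=mvindex(parts,0), Field_2=mvindex(parts,1), Field_3=mvindex(parts,2), Field_4=mvindex(parts,3), Field_5=mvindex(parts,4), Field_6=mvindex(parts,5), Field_7=mvindex(parts,6)
| table FIELD1 Field_1 Field_2 Field_3 Field_4 Field_5 Field_6 Field_7

Unlike the fixed regex, split tolerates segments containing characters other than \w, though it still assumes "/" as the delimiter.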
Hi everyone, my name is Emmanuel Katto. I’m currently working on a project where I need to analyze large datasets in Splunk, and I've noticed that the search performance tends to degrade as the dataset size increases. I'm looking for best practices or tips on how to optimize search performance in Splunk.

1) What are the recommended indexing strategies for managing large volumes of data efficiently?
2) Are there particular search query optimizations I should consider to speed up the execution time, especially with complex queries?
3) How can I effectively utilize data models to improve performance in my searches?

I appreciate any insights or experiences you can share. Thank you in advance for your help! Best, Emmanuel Katto
Hi @Real_captain, the mode=sed was not from the field extraction wizard. May I know why you thought of using mode=sed? Pls suggest. As you can see:

Syntax: mode=sed
Description: Specify to indicate that you are using a sed (UNIX stream editor) expression.
sed-expression
Syntax: "<string>"
Description: When mode=sed, specify whether to replace strings (s) or substitute characters (y) in the matching regular expression.

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex#Syntax

Not sure, but let's try (note the inner quotes must be escaped because the regex is passed as a string):

| rex field=Message "(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH1>[^\"]+)"

Sample log lines will be helpful to troubleshoot this, thanks.
Thanks @richgalloway for your response. I tried with

| where hostname like hostname_pattern

but it's not returning any search results.
I had the same understanding, thanks for confirming that. I have been asked to modify the raw event that we receive in JSON format, to include a new key-value pair and to replace the value of one of the fields for a specific key.
This was my last fallback option, as I have multiple fields and the query would become lengthy. It also gives me the flexibility to add extra fields to the _raw event. I am just assuming that Splunk has some inbuilt solution that I might be missing.
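For what it's worth, Splunk's JSON eval functions can do this kind of modification at search time. A minimal sketch, assuming Splunk 8.1+ and entirely hypothetical key names (status, owner):

| makeresults
| eval _raw="{\"status\": \"open\", \"device\": \"fw01\"}"
| eval _raw=json_set(_raw, "status", "solved", "owner", "alice")

json_set replaces the value at an existing path and creates the path if it doesn't exist, which covers both "replace a value" and "add a new key-value pair". Note this rewrites _raw in the search results only, not the indexed event.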
Ok. Unless you do something very, very strange, a Splunk component should be reading and indexing or forwarding its own internal logs. That's why I asked about the internal logs. Your output from list monitor shows just that, and it's a normal thing.

I asked how you checked whether you're getting the data or not because it's a fairly typical case, when your source has misconfigured time settings (either the clock is not in sync or the timezone is wrongly set up), that the data is actually indexed but at the wrong point in time. So when you search over "last 15 minutes" or the last few hours it doesn't show up, but the data is there - just badly onboarded. Try searching for those "not working" hosts over a bigger time range (you could risk all-time, especially if you do it with tstats):

| tstats min(_time) max(_time) count where index=_internal host=<your_forwarder>

I'm assuming your data flow is UF->HF->idx, right? Do the Windows UFs go through the same HFs as the Linux ones? Look for information about the connection established to the downstream HF (or errors) in the UF's splunkd.log. If there are errors, look for corresponding errors/warnings on the HF's side.
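To make the timestamps readable at a glance, a slightly expanded sketch of the same check (the host value remains a placeholder):

| tstats min(_time) as first_seen max(_time) as last_seen count where index=_internal host=<your_forwarder> by host
| eval first_seen=strftime(first_seen, "%F %T"), last_seen=strftime(last_seen, "%F %T")

If last_seen is in the future, or first_seen is hours off from when the host actually started sending, that points to the clock/timezone problem described above.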