All Posts


Ah - sometimes the easy answer is the answer, but sometimes it isn't! From what I can see of fill_summary_index.py, the dedup option isn't actually magic. That means there's no reason you can't just make a few minor modifications (mostly to the timeframes) and backfill the summary index manually.

Indeed, there's no magic here anyway. If fill_summary_index.py is not filling in the blank areas of the summary index correctly using the saved search from the "regular" collector, then it seems likely that the main search simply isn't working right in the first place. The reasoning: when it runs "normally", it runs over a time period and dumps its output into the summary index. That is exactly what the backfilling version does; the only difference is that it sets a different start/end time. Again, no magic, just searches running over time periods.

So, a couple of ways forward:
1) You could provide the search and maybe we can spot why it doesn't work right for backfilling.
2) You could craft a "deduplication search" that you pass to the backfill function to tell it *how* to identify which periods need backfilling. (I don't know how to do this offhand, but the notes for the backfill function say you can, so I believe it. And just because I don't know how to do it right now doesn't mean we can't help figure it out, or someone else might!)
3) Or you could just manually run the search that would do the backfilling, selecting the timeframes yourself so that you don't get duplication. I'd guess it's just a standard saved search that ends with `| collect ...` at the end (see the sketch at the end of this post).

Anyway, I do hope this helps, and maybe this bump will get someone else who does this a lot to chime in - we'll see!
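For option 3, a minimal sketch of what I mean by a manual backfill - the base search, time window, and summary index name below are made up, yours will come from your scheduled summary-populating search:

| run the same search your scheduled search runs, but over the missing window:

index=web sourcetype=access_combined earliest=-7d@d latest=-6d@d
| stats count as hits by host
| collect index=my_summary

Run that once per missing time window (adjusting earliest/latest each time) and you should end up with the same rows the scheduled search would have written.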
Something like this:

<your search> host IN (*location_a*, *location_b*)
| fields inbound_rate outbound_rate host
| eval location = if(match(host, "location_a"), "location_a", "location_b")
``` rex is usually more code-economical, split is more efficient, etc. ```
| addtotals fieldname=a_TPS
| timechart span=5m sum(a_TPS) as a_TPS by location
| addtotals

Note: I assume that HOST (all caps) is the same field as Splunk's essential field host (all lower-case), and therefore accessible in your index search. Filtering in the index search is more performant. If the HOST field is not accessible in the index search, you can still use a where clause; it's just less efficient. Also, there can be many ways to calculate location, but I am showing the least efficient method because I have no details about how location is embedded into host values and what regularities they have. (In my organization, for example, location is indicated at a fixed level of the domain name, therefore I do not need match or rex.) Hope this helps.
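To illustrate the rex alternative mentioned in the comment above - a sketch that assumes the literal strings location_a/location_b appear somewhere in the host value:

<your search> host IN (*location_a*, *location_b*)
| rex field=host "(?<location>location_[ab])"

and then the same addtotals/timechart pipeline as above. One rex replaces the eval, and the regex can be adjusted once you know how location really appears in your host names.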
Hi Splunkers, I need help with my dashboard because I'm stuck on this problem. I've already searched and tried many JavaScript snippets, and it's still not working. Basically, what I need is: after clicking a drilldown button, the result should be a table that shows me more details about a use case. Look at the first down arrow - when I click it, it should show me the details, but I cannot render a table. The token for my results should be the value of use_case_name. My JavaScript code:

requirejs([
    '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/chartview',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(_, TableView, ChartView, SearchManager, mvc) {
    var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
        initialize: function() {
            // initialize will run once, so we will set up a search and a chart to be reused.
            this._searchManager = new SearchManager({
                id: 'details-search-manager',
                preview: false
            });
            this._tableView = new TableView({
                'managerid': 'details-search-manager',
                'charting.legend.placement': 'none'
            });
        },
        canRender: function(rowData) {
            // Since more than one row expansion renderer can be registered we let each decide if they can handle that data
            // Here we will always handle it.
            return true;
        },
        render: function($container, rowData) {
            // rowData contains information about the row that is expanded. We can see the cells, fields, and values
            // We will find the sourcetype cell to use its value
            var use_case_nameCell = _(rowData.cells).find(function (cell) {
                return cell.field === 'use_Case_name';
            });
            // update the search with the sourcetype that we are interested in
            // this._searchManager.set({ search: 'index=_internal sourcetype=' + sourcetypeCell.value + ' | table user | dedup user' });
            this._searchManager.set({ search: '| inputlookup XXXX.csv | search use_case_name=' + use_case_nameCell.value + ' | table XXX | transpose' });
            // $container is the jquery object where we can put our content.
            // In this case we will render our chart and add it to the $container
            // $container.append(this._chartView.render().el);
            $container.append(this._tableView.render().el);
        }
    });
    var tableElement = mvc.Components.getInstance('expand_with_events');
    tableElement.getVisualization(function(tableView) {
        // Add custom cell renderer, the table will re-render automatically.
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
        tableView.table.render();
    });
});

Thank you, guys.
You can do this with eventstats. The exact method depends on data characteristics and the desired output. The following assumes that the index _ad search returns fewer results than the index _network search, that every snat has at least one matching client_ip, and that you want to tabulate all combinations with client_ip.

(index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18)) Account_Name=JohnDoe Source_Network_Address IN (10.10.10.10 20.20.20.20)) OR (index=_network snat IN (10.10.10.10*,20.20.20.20*)) ``` get relevant data ```
| bucket span=1m _time ``` common time buckets ```
| eval Source_Network_Address1 = case(EventCode==4771, trim(Client_Address, "::ffff:"))
| eval SourceIP = Source_Network_Address
| eval Account_Name4625 = case(EventCode==4625, mvindex(Account_Name,1))
| eval Account_Name4771 = case(EventCode==4771, Account_Name)
| eval Account_Name = coalesce(Account_Name4771, Account_Name4625)
| eval Source_Network_Address_Port = SourceIP+":"+Source_Port
| rex field=ComputerName "(?<DCName>^([^.]+))"
| rename Source_Network_Address_Port as snat ``` the above applies to index _ad ```
| rex field=client "^(?<client_ip>.*?)\:(?<client_port>.*)" ``` this applies to index _network ```
| eventstats values(client_ip) as client_ip by _time snat ``` assuming index _ad search returns fewer events ```
| stats count by _time snat Account_Name EventCode DCName client_ip

If client_ip could be missing for some snat and you can accept a multivalue client_ip, change the last stats to

| stats count values(client_ip) as client_ip by _time snat Account_Name EventCode DCName

If the event counts are the other way around, use eventstats on the other dataset. Hope this helps.
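If it helps to see what the eventstats step is doing, here is a self-contained toy run (made-up values, no real indexes needed):

| makeresults count=4
| streamstats count as n
| eval snat="10.10.10.10:443"
| eval client_ip=case(n==1, "192.0.2.5", n==2, "192.0.2.6", true(), null())
| eventstats values(client_ip) as client_ip by snat

After the eventstats, every row carries the full set of client_ip values seen for that snat, which is what lets the later stats group the _ad and _network events together.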
Hi @PickleRick  Yes, I did - I pointed all the peer nodes to the CM, which is also my License Manager.
I would like to start encrypting traffic between the universal forwarder on my Windows devices and my single Splunk 9.x indexer, which is on a Windows server. For the moment I am only concerned with getting SSL going on the indexer. I see you can also set up a certificate on the clients for authentication to the server, but I want to take it one step at a time.

I have a GoDaddy cert I would like to use with the indexer, and I have looked over much of the documentation on Splunk's site about all the ways you can make this configuration work, but it left me confused. I can't find any mention of what to do with the private key. I see where the documentation references the server certificate and even the sslPassword in the inputs.conf file, but no reference to where to put the key location. Is it just assumed you combine the server cert + the private key into a single pem file, and if so, is the order just server cert first then private key? Example:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
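The kind of inputs.conf setup I'm picturing on the indexer looks roughly like this - the port, path, and file name are placeholders, not something I've confirmed works:

# inputs.conf on the indexer - port and path are placeholders
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer_combined.pem
sslPassword = <private key password, if the key is encrypted>
requireClientCert = false

where indexer_combined.pem would be the GoDaddy server cert, then the private key, then the intermediate/CA certs concatenated into one file - but I'd like confirmation that this is the expected layout.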
Again - that's not how it works. First, the application itself has to be able to generate - as we say - an "event", which will either be written to a file that Splunk's forwarder can read or sent over the network (there are also other ways to receive or pull data into Splunk, but these are the most popular ones). Then you have to ingest that data into Splunk. Once you have this data in Splunk, yes, you can schedule a report which will - for example - check every 5 minutes whether, and how many, users logged into your system. But still, first and foremost, the application itself has to report this action somewhere so that Splunk can get such an event. It's not a fortune teller, you know.
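As a sketch of the kind of scheduled search meant here (index, sourcetype, and field names are made up - yours will depend on how the application logs):

index=app_logs sourcetype=myapp:auth action=login earliest=-5m@m latest=@m
| stats count as logins by user

Save that as a report or alert that runs every 5 minutes and you get a rolling view of who logged in.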
@gcusello  do you have any idea on troubleshooting this issue?
Thanks for the information. For the application, I wanted to put an email alert on it for when someone logs in and out of the application. Is that possible?
Yes. The additional options are one of the reasons for using transforms-based extractions (REPORT-) instead of inline EXTRACT. Notice, however, that REPEAT_MATCH is for index-time extractions. You might want to consider MV_ADD instead.
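A minimal sketch of that approach (the sourcetype and stanza names are made up):

props.conf:
[java_app]
REPORT-caused_by = caused_by_exceptions

transforms.conf:
[caused_by_exceptions]
REGEX = Caused by: (?<Exception>[^\r\n]+)
MV_ADD = true

With MV_ADD = true, every "Caused by:" line in the stack trace is appended to the multivalue Exception field, instead of only the first match being kept.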
Splunk on its own is not a "monitoring tool", meaning that Splunk is not meant to do - for example - active checks against an application the way monitoring suites do (it can probably be forced to do that, but it's not going to be an optimal solution). Its forte is data analysis. So as long as you have data from external sources, you can put that data into Splunk, search it and analyze it. Then - if you have events describing, for example, the results of such checks - you can schedule an alert if there are too many failed probes, or calculate whether the SLA levels were met.
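As a sketch of what such an alert search could look like (index, sourcetype, and the status field are invented for illustration):

index=checks sourcetype=probe:result earliest=-15m@m latest=@m
| stats count(eval(status="failed")) as failed, count as total
| eval failure_pct = round(100 * failed / total, 2)
| where failure_pct > 10

Scheduled every 15 minutes, this fires whenever more than 10% of the probes in the window failed; the same stats output could feed an SLA calculation over a longer time range.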
I am very new to Splunk and having a hard time figuring out how to monitor applications. Can someone help?
Hi, we are using the following regex to capture "Caused by" exceptions within a Java stack trace:

Caused by: (?P<Exception>[^\r\n]+)

When testing in regex101, it seems to work well and captures both instances of "Caused by" in the sample trace: https://regex101.com/r/yL1ucO/1

But when used with EXTRACT within props.conf, Splunk only gets the first instance, i.e. "SomeException". The 2nd occurrence, "AnotherException", is not captured. Should I be using REPEAT_MATCH with a transforms stanza, or is there a way to fix this within props itself?
Apologies, I took out all the extra renames to try to simplify the search, since those aren't really critical to the data I'm trying to get. The fields are actually as they are named in the full search with the join. The first search thus should be:

index=api source=api_call
| rename id as sessionID
| fields apiName, message.payload, sessionID
I now get almost 2 million events, which is about all the events in the WAF log for yesterday, but no table of results. I know that yesterday there was 1 connection through the WAF which produced 6 API calls (one primary and then several downstream). So the number of lines in my table should be 6.
Hello, I'm trying to sum by groups (I have 2 groups) and then plot them individually and also plot the sum. I'm using the following search to plot group 1:

| fields inbound_rate outbound_rate HOST
| where HOST like "%location_a%"
| addtotals fieldname=a_TPS
| timechart span=5m sum(a_TPS) as a_TPS

This works and sums the TPS of all the servers in location a. Now I have servers in another location (location_b). How can I plot TPS for location a, location b, and the sum of both? Thanks.
Try specifying output_mode=json.  See https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTUM/RESTusing#Encoding_schemes
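For example, hitting an endpoint with output_mode set (the host, port, and credentials here are placeholders):

curl -k -u admin:yourpassword "https://localhost:8089/services/server/info?output_mode=json"

The same query parameter works on the other REST endpoints as well.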
addinfo adds the info_* fields to all the events in the event pipeline, i.e. whatever is returned by your index search. makeresults (by default) creates a single event. This can be changed with the count parameter, e.g. makeresults count=10
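A quick self-contained way to see both behaviours (no index needed):

| makeresults count=10
| addinfo
| table _time info_min_time info_max_time info_search_time

This produces 10 rows, each stamped with the search's time-range metadata added by addinfo.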
If you actually have tabs separating your fields (instead of commas), the issue is that you have used + (at least 1 occurrence) rather than * (zero or more occurrences):

^(?P<ACD>\w+\.\d+)\t(?P<ATTEMPTS>[^\t]+)\t(?P<FAIL_REASON>[^\t]*)\t(?P<INTERVAL_FILE>[^\t]+)\t(?P<STATUS>\w+)\t(?P<START>[^\t]+)\t(?P<FINISH>[^\t]+)\t(?P<INGEST_TIME>.+)
Hi @Shashwat .Pandey, I looked around, and as mentioned above, the maximum number of exclusions is 500. This includes these types (and maybe more):

case BASE_PAGE:
case IFRAME:
case VIRTUAL_PAGE:
case AJAX_REQUEST:
case SYNTH_JOB_REF: