All Posts

Hi @Roy_9, I suppose it's a script from an add-on; which one? If it's a Splunk-supported add-on, you can open a case with Splunk Support. Are you sure the issue is in the script? What happens if you disable it? Ciao. Giuseppe
Hello Splunkers!! I want to connect or configure Splunk with Kafka. Our Kafka resides in a Kubernetes cluster. Please guide me on which approach I should follow, because there is a lot of material available and it's confusing for me.
Hi @gcusello OK. Thanks for your advice. 
Hi @richgalloway , We tried passing the output_mode parameter to the REST endpoint, but we still see an XML response.
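For reference, this is roughly how we are calling it; the host, credentials, and search below are placeholders, but output_mode=json is the parameter we set:

curl -k -u admin:changeme "https://localhost:8089/services/search/jobs/export?search=search%20index%3D_internal&output_mode=json"

My understanding is that if output_mode is honored, the response should come back as JSON rather than the default Atom/XML.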
The thing is, it does not listen at all on Linux after the mentioned version. On Windows I could check, and it works as documented: as written, by default it is limited to localhost. Anyway, thanks for this info.
The report at startup indicates port 8089 is not in use by any process (it's "open" for use). It does not mean Splunk is listening on that port (at least not yet). Version 9.0 changed the default behavior of the UF's management port. See the Release Notes at https://docs.splunk.com/Documentation/Splunk/9.0.8/ReleaseNotes/MeetSplunk#What.27s_New_in_9.0
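If you do need the UF's management port listening again, the setting to look at is in server.conf; this is a sketch from memory, so please verify the exact attribute against the 9.x server.conf spec:

[httpServer]
disableDefaultPort = false

Restart the forwarder afterwards and re-check with netstat/ss.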
Percentage as the sum of values in each time bucket?

index IN ("Index 1", "Index 2", "Index 3")
| timechart count by index
| addtotals
| foreach * [eval <<FIELD>> = if(Total == 0, 0, <<FIELD>> / Total * 100)]
| fields - Total

As @scelikok indicates, moving the index filter into the index search is more efficient. (The above is an alternative syntax.)
Hi, I would like to know about the notable events triggered by a correlation search without accessing the Incident Review dashboard, as we are experiencing a significant number of notables being triggered consistently. How can we identify the source of noise from a specific correlation search? Thanks in advance
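For example, I was considering something like this to count notables per correlation search, assuming the standard notable index and the search_name field that Enterprise Security stamps on notables:

index=notable
| stats count by search_name
| sort - count

Would that be a reasonable way to find the noisiest searches?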
Before delving into regex details, could you explain the "badness" in the sample data that you are trying to rectify? What are the expected results? (Also, please use a code section that auto-wraps.) In the output of your sample code, the "good" entry is exactly unchanged from the original entry. (By the way, the alternative value in the if function cannot be new; it should be old_field.) To be clear, your sample code does not replace non-alphanumeric characters at all, but executes an extremely complex set of purpose-built matches. If the sole goal is to replace non-alphanumeric characters globally, replace(old_field, "\W", "__non_alphanumeric__") suffices. Here is a simple example to do this when old_field is the only field of interest.

| makeresults
| fields - _time
| eval old_field = mvappend("{\"bundle\": \"com.servicenow.blackberry.ful\", \"name\": \"ServiceNow Agent\\u00ae - BlackBerry\", \"name_version\": \"ServiceNow Agent\\u00ae - BlackBerry-17.2.0\", \"sw_uid\": \"faa5c810a2bd2d5da418d72hd\", \"version\": \"17.2.0\", \"version_raw\": \"0000000170000000200000000\"}",
  "{\"bundle\": \"com.penlink.pen\", \"name\": \"PenPoint\", \"name_version\": \"PenPoint-1.0.1\", \"sw_uid\": \"cba7d3601855e050d8new0f34\", \"version\": \"1.0.1\", \"version_raw\": \"0000000010000000000000001\"}")
| eval sourcetype="custom:data"
``` data emulation above ```
| mvexpand old_field
| spath input=old_field
| fields - old_field
| foreach version * [eval <<FIELD>> = if(sourcetype == "custom:data", replace(<<FIELD>>, "\W", "__non_alphanumeric__"), <<FIELD>>)]
| tojson output_field=new
| stats values(new) as new

The result is a two-value field:

{"bundle":"com__non_alphanumeric__penlink__non_alphanumeric__pen","name":"PenPoint","name_version":"PenPoint__non_alphanumeric__1__non_alphanumeric__0__non_alphanumeric__1","sourcetype":"custom__non_alphanumeric__data","sw_uid":"cba7d3601855e050d8new0f34","version":"1__non_alphanumeric__0__non_alphanumeric__1","version_raw":"0000000010000000000000001"}
{"bundle":"com__non_alphanumeric__servicenow__non_alphanumeric__blackberry__non_alphanumeric__ful","name":"ServiceNow__non_alphanumeric__Agent__non_alphanumeric____non_alphanumeric____non_alphanumeric____non_alphanumeric__BlackBerry","name_version":"ServiceNow__non_alphanumeric__Agent__non_alphanumeric____non_alphanumeric____non_alphanumeric____non_alphanumeric__BlackBerry__non_alphanumeric__17__non_alphanumeric__2__non_alphanumeric__0","sourcetype":"custom__non_alphanumeric__data","sw_uid":"faa5c810a2bd2d5da418d72hd","version":"17__non_alphanumeric__2__non_alphanumeric__0","version_raw":"0000000170000000200000000"}

Are you trying to replace, say, "." with one alphanumeric string (e.g., "dot"), ":" with a different alphanumeric string (e.g., "colon"), and so on and so forth? If so, what are the rules? Simply put: forget about regex for now. Could you explain the logic between the sample data and the desired results? Also, is the end goal to form a JSON field, or do you expect to extract JSON nodes into fields?
You need to first extract data beyond the "dynamic" key. (Depending on semantics, I suspect that there is some data-design improvement your developers could make so downstream users don't have to do this workaround.)

| spath input=json_data path=data output=beyond
| eval key = json_array_to_mv(json_keys(beyond))
| eval beyond = json_extract(beyond, key) ``` assuming there is only one top key ```
| spath input=beyond path=x
| spath input=beyond path=y

The following is the full emulation (I don't see the purpose of all the transposes):

| makeresults count=1
| eval json_data="{\"data\": {\"a\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}"
| append
    [ makeresults count=1
    | eval json_data="{\"data\": {\"b\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}"]
| append
    [ makeresults count=1
    | eval json_data="{\"data\": {\"c\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}"]
| append
    [ makeresults count=1
    | eval json_data="{\"data\": {\"d\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}"]
| spath input=json_data path=data output=beyond
| eval key = json_array_to_mv(json_keys(beyond))
| eval beyond = json_extract(beyond, key)
| spath input=beyond path=x
| spath input=beyond path=y
| table json_data x y beyond

json_data | x | y | beyond
{"data": {"a": {"x": {"mock_x_field": "value_x"}, "y": {"mock_y_field": "value_y"}}}} | {"mock_x_field":"value_x"} | {"mock_y_field":"value_y"} | {"x":{"mock_x_field":"value_x"},"y":{"mock_y_field":"value_y"}}
{"data": {"b": {"x": {"mock_x_field": "value_x"}, "y": {"mock_y_field": "value_y"}}}} | {"mock_x_field":"value_x"} | {"mock_y_field":"value_y"} | {"x":{"mock_x_field":"value_x"},"y":{"mock_y_field":"value_y"}}
{"data": {"c": {"x": {"mock_x_field": "value_x"}, "y": {"mock_y_field": "value_y"}}}} | {"mock_x_field":"value_x"} | {"mock_y_field":"value_y"} | {"x":{"mock_x_field":"value_x"},"y":{"mock_y_field":"value_y"}}
{"data": {"d": {"x": {"mock_x_field": "value_x"}, "y": {"mock_y_field": "value_y"}}}} | {"mock_x_field":"value_x"} | {"mock_y_field":"value_y"} | {"x":{"mock_x_field":"value_x"},"y":{"mock_y_field":"value_y"}}
@richgalloway's solution should give the correct results and is more efficient. But you need to clarify @danielcj's question thoroughly. In your response, you reprinted | rename id as sessionID as in the first part of your original post, which contradicts the second part of your original post where | rename message.id as sessionID is printed. Does index=api give id, or message.id, or both (but only one of them should be used as sessionID)? @richgalloway's solution should work in the second case. If index=api gives id (or if it gives both but only id should be used in the match) - which the first part of your original post and your reply to @danielcj imply - the solution can easily be adapted to

(index=api source=api_call) OR index=waf
| fields id, apiName, message.payload, src_ip, requestHost, requestPath, requestUserAgent, sessionID
| eval sessionID = coalesce(sessionID, id)
| stats values(*) as * by sessionID

Hope this helps.
Maybe it's as easy as adding it in the WHERE clause:

| mstats latest_time(application_ready_time.value) as latest_ts WHERE index=my-metrics-index host=some-host app.name IN ("appname1", "appname2") BY app.name
| eval past_threshold=if(now() - latest_ts >= 30, "Y", "N")
| eval latest=strftime(latest_ts, "%Y-%m-%d %H:%M:%S")
| table app.name latest past_threshold x

Give that a try. If it doesn't work, you can just post-filter it with something like

| mstats latest_time(application_ready_time.value) as latest_ts where index=my-metrics-index host=some-host by app.name
| search app.name IN ("appname1", "appname2")
| eval past_threshold=if(now() - latest_ts >= 30, "Y", "N")
| eval latest=strftime(latest_ts, "%Y-%m-%d %H:%M:%S")
| table app.name latest past_threshold x

The former, with the app.names as part of the WHERE clause, is probably preferable. Splunk can often push these sorts of search terms down into the search itself, but I'm not sure if it does that with mstats. Meaning, if it CAN do that, it'll perform the same as the first one (more or less). But if it can't do that, it'll run quite a bit slower because it'll get all those stats off disk, then throw away all but the two sets you want to keep. But it would work, either way.
Hi All, I updated the Splunk Universal Forwarder from 8.2.6 to 9.1.3 on a Debian host. No specific configuration, basically everything by default. I would like to use the REST capabilities, which I already used with the older version, but this time the port is not listening, even though startup says it's listening:

Checking mgmt port [8089]: open

Netstat shows no 8089 either. The host has no firewall, no bulls**t, just a pure playground, and as I said, the older version worked perfectly. What can be the problem, another bug in the software?
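For completeness, this is how I checked on the Linux host (ss gives the same result):

netstat -tlnp | grep 8089

It returns nothing after the upgrade.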
Have you tried to just use $info_sid$ in the href?

| eval application_name = "<a href=https://<hostname>:8000/en-US/app/search/security_events_dashboard?form.field2=&form.application_name=" . application_name . ">" . application_name . "</a>"
| eval email_subj="Security Events Alert", email_body="<p>Hello Everyone,</p><p>You are receiving this notification because the application has one or more security events reported in the last 24 hours.<br></p><p>Please click on the link available in the table to fetch events for a specific application.</p><p>To view Splunk results, <a href=https://<hostname>:8000/en-US/app/search/search?sid=".$info_sid$.">Click here</a></p>"

If your scheduled search has already sent an alert, you can go to the "Activity" menu and find the exact URL for that search. I don't believe Splunk accepts anything except the dotted-numeral SID.
Ah - sometimes the easy answers are the answer, but sometimes they're not!

So, from what I can see of fill_summary_index.py, the dedup option isn't actually magic. That means there's no reason you can't just make a few minor modifications (mostly to timeframes) to backfill the summary index manually.

Indeed, there's no magic here anyway. If fill_summary_index.py is not filling in the blank areas of your summary index correctly using the saved search from the "regular" collector, then it seems likely that the main search simply isn't working right anyway. The reasoning here is that when it runs "normally", it's running over a time period and dumping its output to that summary index. This is exactly what the backfilling version does, with the only difference being that it sets a different start/end time. Again, no magic, just searches running over time periods.

So, a couple of ways forward.

1) You could provide the search and maybe we can spot why it doesn't work right for backfilling.

2) You could craft up a "deduplication search" that you can pass to the backfill function to tell it *how* to identify which periods need backfilling. (I don't know how to do this, but the notes for the backfill function say you can, so I believe it. And of course, just because I don't know how to do it right now doesn't mean we can't help figure it out, or someone else might!)

3) Or maybe you can just manually run the search that would do the backfilling, only manually selecting the timeframes so that you don't get duplication. I mean, I'd guess it's just a standard saved search that ends up with `| collect...` at the end. A sketch of a backfill invocation follows below.

Anyway, I do hope this helps, and maybe this bump will get someone else who does this a lot to chime in - we'll see!
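For reference, a backfill invocation typically looks something like this; I'm writing it from memory of the docs, so double-check the flags against your version of the script (the app, search name, and credentials are placeholders):

splunk cmd python fill_summary_index.py -app search -name "my summary-populating search" -et -7d@d -lt @d -j 4 -dedup true -auth admin:changeme

The -et/-lt pair sets the backfill window, -j limits concurrent searches, and -dedup true asks the script to skip periods that already have summary data.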
Something like

<your search> host IN (*location_a*, *location_b*)
| fields inbound_rate outbound_rate host
| eval location = if(match(host, "location_a"), "location_a", "location_b") ``` rex is usually more code-economic, split is more efficient, etc ```
| addtotals fieldname=a_TPS
| timechart span=5m sum(a_TPS) as a_TPS by location
| addtotals

Note: I assume that HOST (all caps) is the same field as Splunk's essential field host (all lower-case), therefore accessible in your index search. Filtering in the index search is more performant. If the HOST field is not accessible in the index search, you can still use a where clause; it's just less efficient. Also, there can be many ways to calculate location, but I am showing the least efficient method because I have no details about how location is embedded into host values and what regularities they have. (In my organization, for example, location is indicated at a fixed level of domain names, therefore I do not need match or rex.) Hope this helps.
Hi Splunkers, I need help with my dashboard because I'm stuck on this problem. I've already searched and tried many JavaScript snippets, and it's still not working. Basically, what I need is: after clicking the drilldown button, the result should be a table that shows me more details about a use case. Look at the first down arrow. When I click it, it should show me the details, but I cannot render a table. The token for the results should be the value of use_case_name. My JavaScript code:

requirejs([
    '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/chartview',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(_, TableView, ChartView, SearchManager, mvc) {
    var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
        initialize: function() {
            // initialize will run once, so we will set up a search and a table to be reused.
            this._searchManager = new SearchManager({
                id: 'details-search-manager',
                preview: false
            });
            this._tableView = new TableView({
                'managerid': 'details-search-manager',
                'charting.legend.placement': 'none'
            });
        },
        canRender: function(rowData) {
            // Since more than one row expansion renderer can be registered, we let each decide
            // if they can handle that data. Here we will always handle it.
            return true;
        },
        render: function($container, rowData) {
            // rowData contains information about the row that is expanded.
            // We can see the cells, fields, and values.
            // We will find the use_case_name cell to use its value.
            var use_case_nameCell = _(rowData.cells).find(function(cell) {
                return cell.field === 'use_case_name';
            });
            // Update the search with the use case that we are interested in.
            // this._searchManager.set({ search: 'index=_internal sourcetype=' + sourcetypeCell.value + ' | table user | dedup user' });
            this._searchManager.set({
                search: '| inputlookup XXXX.csv | search use_case_name=' + use_case_nameCell.value + ' | table XXX | transpose'
            });
            // $container is the jQuery object where we can put our content.
            // In this case we will render our table and add it to the $container.
            // $container.append(this._chartView.render().el);
            $container.append(this._tableView.render().el);
        }
    });

    var tableElement = mvc.Components.getInstance('expand_with_events');
    tableElement.getVisualization(function(tableView) {
        // Add custom row expansion renderer; the table will re-render automatically.
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
        tableView.table.render();
    });
});

Thank you guys.
You can do this with an eventstats. The exact method can depend on data characteristics and desired output. The following assumes that the index=_ad search returns fewer results than the index=_network search, that every snat has at least one matching client_ip, and that you want to tabulate all combinations with client_ip.

(index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18)) Account_Name=JohnDoe Source_Network_Address IN (10.10.10.10 20.20.20.20)) OR (index=_network snat IN (10.10.10.10*,20.20.20.20*)) ``` get relevant data ```
| bucket span=1m _time ``` common time buckets ```
| eval Source_Network_Address1 = case(EventCode==4771, trim(Client_Address, "::ffff:"))
| eval SourceIP = Source_Network_Address
| eval Account_Name4625 = case(EventCode==4625, mvindex(Account_Name,1))
| eval Account_Name4771 = case(EventCode==4771, Account_Name)
| eval Account_Name = coalesce(Account_Name4771, Account_Name4625)
| eval Source_Network_Address_Port = SourceIP+":"+Source_Port
| rex field=ComputerName "(?<DCName>^([^.]+))"
| rename Source_Network_Address_Port as snat ``` the above applies to index _ad ```
| rex field=client "^(?<client_ip>.*?)\:(?<client_port>.*)" ``` this applies to index _network ```
| eventstats values(client_ip) as client_ip by _time snat ``` assuming index _ad search returns fewer events ```
| stats count by _time snat Account_Name EventCode DCName client_ip

If client_ip could be missing for some snat and you can accept a multivalue client_ip, change the last stats to

| stats count values(client_ip) as client_ip by _time snat Account_Name EventCode DCName

If the event counts are the opposite, use eventstats on the other dataset. Hope this helps.
Hi @PickleRick Yes, I did - I pointed all the peer nodes to the CM, which is also my License Manager.
I would like to start encrypting traffic between the universal forwarder on my Windows devices and my single Splunk 9.x indexer, which is on a Windows server. For the moment I am only concerned with getting SSL going on the indexer. I see you can also set up a certificate on the clients for authentication to the server, but I want to take it one step at a time.

I have a GoDaddy cert I would like to use with the indexer, and I have looked over much of the documentation on Splunk's site about all the ways you can make this configuration work, but it left me confused. I can't find any mention of what to do about the private key. I see where the documentation references the server certificate and even the sslPassword in the inputs.conf file, but no reference to where to put the key location. Is it just assumed you combine the server cert + the private key into a single PEM file, and if so, is the order just server cert first, then private key? Example:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
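For reference, here is roughly the inputs.conf shape I have pieced together from the docs for the combined-PEM approach; the port, path, and filename are illustrative, and I'd appreciate confirmation that this is right:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer_combined.pem
sslPassword = <private key password>
requireClientCert = false

My understanding is that serverCert points at a single PEM containing the server certificate, then the private key, then any intermediate CA certificates, which would explain why there is no separate key setting.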