All Posts

The docs on the streamstats command say that "all accumulated statistics" are reset by the reset_* options. That implies the reset is global, not per by-field group. It might be worth submitting docs feedback to get this stated more explicitly. As for the practical solution, you already got it from @ITWhisperer.
It's not a very good search to begin with (the multisearch is unneeded and the wildcard-prefixed search terms are slow), so maybe show a sample of your data (anonymized if needed) and a description of what you need to get from it. That might be easier than "fixing" this one.
As @tscroggins said, Splunk clusters are not active-passive setups. One could imagine some duct-tape setups that limit network connectivity to certain times of day, but that would make the cluster as a whole appear severely degraded. You could also consider replicating the servers' state outside Splunk, but that's tricky and not really supported. If you have specific business needs, discuss them with either the Splunk presales team or your friendly local Splunk partner.
I need to display priority data for 7 days with the percentage, but I am unable to display it for 7 days. The query below works for a one-day search but does not display results for 7 days. Could you please help me fix it?

| multisearch
    [ search index=myindex source=mysource "* from *" earliest=-7d@d latest=@d | fields TRN, tomcatget, Queue ]
    [ search index=myindex source=mysource *sent* earliest=-7d@d latest=@d | fields TRN, TimeMQPut, Status ]
    [ search index=myindex source=mysource *Priority* earliest=-7d@d latest=@d | fields TRN, Priority ]
| stats values(*) as * by TRN
| eval PPut=strptime(tomcatput, "%y%m%d %H:%M:%S")
| eval PGet=strptime(tomcatget, "%y%m%d %H:%M:%S")
| eval tomcatGet2tomcatPut=round((PPut-PGet),0)
| fillnull value="No_tomcatPut_Time" tomcatput
| fillnull value="No_tomcatGet_Time" tomcatget
| table TRN, Queue, BackEndID, Status, Priority, tomcatget, tomcatput, tomcatGet2tomcatPut
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by Priority
| eval bad = if(Priority="High", sum_20min + sum_50min + sum_50GTmin, if(Priority="Medium", sum_50min + sum_50GTmin, if(Priority="Low", sum_50GTmin, null())))
| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))
| eval per_cal = if(Priority="High", (good / sum_total) * 100, if(Priority="Medium", (good / sum_total) * 100, if(Priority="Low", (good / sum_total) * 100, null())))
| table Priority per_cal

I am looking to get the output in the format below.
Apart from the direct technical answer: you can't have two of the same setting (two FORMAT entries, two REGEX entries) in the same stanza. The latter overwrites the former. But there are more issues here. Why are you trying to use index-time extractions in the first place?
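If index-time extraction really is needed, the usual shape of the fix is one REGEX/FORMAT pair per stanza, chained from props.conf. A minimal sketch follows; the sourcetype and stanza names here are hypothetical, and the second transform simply matches Reason directly in _raw, sidestepping the SOURCE_KEY question entirely:

# props.conf -- hypothetical sourcetype name; transforms run in order
[radius:syslog]
TRANSFORMS-radius = xmlExtractionIDX, radiusReason

# transforms.conf -- one REGEX/FORMAT pair per stanza
[xmlExtractionIDX]
REGEX = .*?"]\s+\{(\w+)\},\s+\{\w+\},\s+\{([^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
WRITE_META = true

[radiusReason]
REGEX = \{Reason:([^}]*)\}
FORMAT = Reason::$1
WRITE_META = true

Indexed fields created this way would also normally be declared in fields.conf with INDEXED = true so that searches treat them correctly.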
A connection timeout means that your end tried to establish a connection with the destination server (api.securitycenter.microsoft.com) but didn't get any response. This typically means network-level problems, like a lack of proper firewall rules allowing outgoing traffic, or (actually the same thing pushed one step further) a proxy server not being configured when direct outgoing traffic is forbidden.
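As a quick way to tell those two cases apart from the host running the add-on, something like the following can help (the proxy URL is a placeholder for whatever your environment uses):

# direct connectivity test; a hang or timeout here points at egress firewall rules
curl -v --connect-timeout 10 https://api.securitycenter.microsoft.com

# the same test through a proxy, if direct outgoing traffic is forbidden
curl -v --connect-timeout 10 --proxy http://proxy.example.com:8080 https://api.securitycenter.microsoft.com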
Hi @MichaelBs,

If you're using the curl search command, it should automatically convert a body containing an array/list into separate events. The RIPEstat Looking Glass API returns a single object with multiple rrcs items in the data field:

| curl url="https://stat.ripe.net/data/looking-glass/data.json?resource=1.1.1.1"

{
  "messages": [
    [
      "info",
      "IP address (1.1.1.1) has been converted to its encompassing routed prefix (1.1.1.0/24)"
    ]
  ],
  "see_also": [],
  "version": "2.1",
  "data_call_name": "looking-glass",
  "data_call_status": "supported",
  "cached": false,
  "data": {
    "rrcs": [ ... ],
    "query_time": "2024-06-30T17:24:44",
    "latest_time": "2024-06-30T17:24:29",
    "parameters": {
      "resource": "1.1.1.0/24",
      "look_back_limit": 86400,
      "cache": null
    }
  },
  "query_id": "20240630172444-e3bf9bf6-dd38-4cff-aa4b-e78b33f1a2c3",
  "process_time": 70,
  "server_id": "app111",
  "build_version": "live.2024.6.24.207",
  "status": "ok",
  "status_code": 200,
  "time": "2024-06-30T17:24:44.525141"
}

You can return rrcs items as individual events with various combinations of spath, mvexpand, eval, etc.:

| fields data
| spath input=data path="rrcs{}" output=rrcs
| fields rrcs
| mvexpand rrcs
| eval rrc=spath(rrcs, "rrc"), location=spath(rrcs, "location"), peers=spath(rrcs, "peers{}")
| fields rrc location peers
| mvexpand peers
| spath input=peers
| fields - peers

For experimentation, I recommend storing the data in a lookup file to limit the number of calls you make to stat.ripe.net. First search:

| curl url="https://stat.ripe.net/data/looking-glass/data.json?resource=1.1.1.1"
| outputlookup ripenet_looking_glass.csv

Subsequent searches:

| inputlookup ripenet_looking_glass.csv
| fields data
``` ... ```
Hi folks, I am trying to get Defender logs into the Splunk Add-On for Microsoft Security but I am struggling a bit. It "appears" to be configured correctly, but I am seeing this error in the logs:

ERROR pid=222717 tid=MainThread file=ms_security_utils.py:get_atp_alerts_odata:261 | Exception occurred while getting data using access token : HTTPSConnectionPool(host='api.securitycenter.microsoft.com', port=443): Max retries exceeded with url: /api/alerts?$expand=evidence&$filter=lastUpdateTime+gt+2024-05-22T12:34:35Z (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe514fa1bd0>, 'Connection to api.securitycenter.microsoft.com timed out. (connect timeout=60)'))

Is this an issue with the way the Azure Connector App is permissioned, or something else entirely? Thanks in advance
I have used the two events below to test SOURCE_KEY:

<132>1 2023-12-24T09:48:05+00:00 DCSECIDKOASV02 ikeyserver 8244 - [meta sequenceId="2850227"] {Warning}, {RADIUS}, {W-006001}, {An invalid RADIUS packet has been received.}, {0x0C744774DF59FC530462C92D2781B102}, {Source Location:10.240.86.6:1812 (Authentication)}, {Client Location:10.240.86.18:42923}, {Reason:The packet is smaller than minimum size allowed for RADIUS}, {Request ID:101}, {Input Details:0x64656661756C742073656E6420737472696E67}, {Request Type:Indeterminate}
<132>1 2023-12-24T09:48:05+00:00 DCSECIDKOASV02 ikeyserver 8244 - [meta sequenceId="2850228"] {Warning}, {RADIUS}, {W-006001}, {An invalid RADIUS packet has been received.}, {0xBA42228CB3604ECFDEEBC274D3312187}, {Source Location:10.240.86.6:1812 (Authentication)}, {Client Location:10.240.86.19:18721}, {Reason:The packet is smaller than minimum size allowed for RADIUS}, {Request ID:101}, {Input Details:0x64656661756C742073656E6420737472696E67}, {Request Type:Indeterminate}

with the following regex:

[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
WRITE_META = true

Up to that point it works fine. Then I want to add a more precise extraction and extract more info from the Last_Part field using SOURCE_KEY:

[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
SOURCE_KEY = MetaData:Last_Part
REGEX = Reason:(.*?)\}
FORMAT = Reason::$1
WRITE_META = true

But now it doesn't work. Is there any advice on how to do this using SOURCE_KEY?
Hi @Nraj87, "Site A" should be read as "Site 1," and "Site N" should be read as "Site 2, Site 3, ..., Site N." Splunk indexer clustering isn't active-passive; however, you can use site settings to limit forwarding and search to Site 1 and configure cluster replication to copy all data to Site 2. Site 1 should also host the majority of SHC members. If Site 2 is down, your global SHC load-balancing solution should direct users to Site 1, and your indexer cluster will in theory queue replication tasks until Site 2 is up. Your cluster would appear unhealthy whenever Site 2 is down. If you're using SmartStore, the utility of Site 2 is limited. Only hot buckets will be replicated, so in your case only hot buckets open between 01:00 and the time Site 2 goes offline will be replicated. Your object storage solution should be geographically distributed, and indexers in Site 2 would pull warm buckets from remote storage as needed; however, if you're not actively searching Site 2, there would be little work for Site 2 to do. Have you consulted a Splunk presales team? They're better equipped than Splunk Answers to evaluate your business needs and determine whether an M4/M14 architecture meets your requirements.
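To make the site-settings part concrete, a rough sketch of the kind of cluster-manager configuration involved; the exact factors depend on your retention and availability requirements, and the names here are illustrative only:

# server.conf on the cluster manager (use mode = master on pre-9.0 releases)
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1, total:2
site_search_factor = origin:1, total:2

Search heads and (with indexer discovery) forwarders would similarly declare site = site1 in [general] so that ingestion and search affinity stay on Site 1 while replication carries copies to Site 2.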
You could try sorting by username before the streamstats, so that each user's events are contiguous and another user's success can't trigger the reset mid-streak.
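A minimal sketch of that idea against the sample data from the question, assuming it's acceptable to reorder results so each user's events sit together (an artificial order field preserves per-user sequence, since makeresults gives every row the same _time; note the by clause then only needs username):

| makeresults
| eval _raw="username,result
user1,fail
user2,success
user3,success
user1,fail
user1,fail
user1,success
user2,fail
user3,success
user2,fail
user1,fail"
| multikv forceheader=1
| streamstats count as order
| sort 0 username order
| streamstats count(eval(result="fail")) as fail_counter by username reset_after="("result==\"success\"")"
| table username, result, fail_counter

With users grouped together, a success from user2 can no longer interrupt user1's streak, so user1's failures count 1, 2, 3 before its own success resets them.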
You can use the sub-search method, where the sub-search detects the password spray and passes the attempted source_ip and username combinations to the main search, which detects the successful logins.

index=auth0 <successful login condition here>
    [ search index=auth0 <login attempts condition here>
    | eventstats dc(data.user_name) as unique_accounts by data.ip
    | where unique_accounts>10
    | table data.ip, data.user_name ]
| stats count by data.user_name, data.ip, <result,_time>

Note: <...> marks a placeholder block.
Hi Team,

What I'm trying to achieve: find consecutive failure events followed by a success event.

| makeresults
| eval _raw="username,result
user1,fail
user2,success
user3,success
user1,fail
user1,fail
user1,success
user2,fail
user3,success
user2,fail
user1,fail"
| multikv forceheader=1
| streamstats count(eval(result="fail")) as fail_counter by username,result reset_after="("result==\"success\"")"
| table username,result,fail_counter

Outcome: the counter (fail_counter) gets reset for a user (say, user1) if the next event is a success event for a different user (say, user2).

username  result   fail_counter
user1     fail     1
user2     success  0
user3     success  0
user1     fail     1   <- counter reset for user1; it should be 2
user1     fail     2   <- it should be 3
user1     success  0
user2     fail     1
user3     success  0
user2     fail     1
user1     fail     1

Expected: the counter should not reset when a success event for user2 follows a failure event for user1. I would appreciate any help on this. Not sure what I'm missing here.
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <title>Indicators lookup</title> <link rel="shortcut icon" href="/en-US/... See more...
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <title>Indicators lookup</title> <link rel="shortcut icon" href="/en-US/static/@4EAB67D171EF0F268537B07472A600B957BA5791405CEF63525C4E27D3E44D74/img/favicon.ico" /> <link rel="stylesheet" type="text/css" href="{{SPLUNKWEB_URL_PREFIX}}/static/build/css/bootstrap-enterprise.css" /> <link rel="stylesheet" type="text/css" href="{{SPLUNKWEB_URL_PREFIX}}/static/build/css/splunkjs-dashboard.css" /> <meta name="referrer" content="never" /> <meta name="referrer" content="no-referrer" /> <script> (function () { window._splunk_metrics_events = []; window._splunk_metrics_events.active = true; function onLoadSwa() { new SWA({"deploymentID": "38255b58-9e6d-5dfa-8d85-b49d0a12bf26", "userID": "88dc5753653ca0684944db586ae6839c9919d788bf6408b4cca2a8e79949adac", "version": "4", "instanceGUID": "2255A6A4-6BBF-4F0C-8DE8-ABF5E956C44B", "visibility": "anonymous,support", "url": "https://e1345286.api.splkmobile.com/1.0/e1345286"}); }; document.addEventListener("DOMContentLoaded", function(event) { var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src='/en-US/static/app/splunk_instrumentation/build/pages/swa.js'; s.addEventListener('load', onLoadSwa); var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); }); }()); </script> </head> <body class="simplexml preload locale-en" data-splunk-version="8.0.2.1" data-splunk-product="enterprise"> <!-- BEGIN LAYOUT This section contains the layout for the dashboard. Splunk uses proprietary styles in <div> tags, similar to Bootstrap's grid system. --> <header> <a aria-label="Screen reader users, click here to skip the navigation bar" class="navSkip" href="#navSkip" tabIndex="1">Skip Navigation &gt;</a> <div class="header splunk-header"> <div id="placeholder-splunk-bar"> <a href="{{SPLUNKWEB_URL_PREFIX}}/app/launcher/home" class="brand" title="splunk &gt; listen to your data">splunk<strong>&gt;</strong></a> </div> <div id="placeholder-app-bar"></div> </div> <a id="navSkip"></a> </header> <div class="dashboard-body container-fluid main-section-body" data-role="main"> <div class="dashboard-header clearfix"> <h2>Indicators lookup</h2> </div> <div class="fieldset"> <div> <legend><b>Kaspersky CyberTrace connection settings</b></legend> <div class="kl-panel-body"> <div class="input input-text" id="settings-ct-host"> <label>Kaspersky CyberTrace address</label> </div> <div class="input input-text" id="settings-ct-port"> <label>Kaspersky CyberTrace port</label> </div> </div> </div> <div></div> <div id="input1" class="html"> <div class="fieldset"> <legend><b>Kaspersky CyberTrace lookup indicator</b></legend> <div class="panel-body html"> <p>Enter indicator, that you want to lookup in the Kaspersky CyberTrace</p> </div> </div> <div class="input input-text" id="input2"> <label>Indicator</label> </div> <div class="form-submit" id="search_btn"> <button class="btn btn-primary submit">Lookup</button> </div> </div> </div> <div id="row1" class="dashboard-row dashboard-row1"> <div id="panel1" class="dashboard-cell" style="width: 100%;"> <div class="dashboard-panel clearfix"> <div class="panel-element-row"> <div id="element1" class="dashboard-element table" style="width: 100%"> <div class="panel-head"> <h3>Matching events</h3> </div> <div class="panel-body"></div> </div> </div> </div> </div> </div> </div> <!-- END LAYOUT --> <script src="{{SPLUNKWEB_URL_PREFIX}}/config?autoload=1" 
crossorigin="use-credentials"></script> <script src="{{SPLUNKWEB_URL_PREFIX}}/static/js/i18n.js"></script> <script src="{{SPLUNKWEB_URL_PREFIX}}/i18ncatalog?autoload=1"></script> <script src="{{SPLUNKWEB_URL_PREFIX}}/static/build/simplexml/index.js"></script> <script type="text/javascript"> // <![CDATA[ // <![CDATA[ // // LIBRARY REQUIREMENTS // // In the require function, we include the necessary libraries and modules for // the HTML dashboard. Then, we pass variable names for these libraries and // modules as function parameters, in order. // // When you add libraries or modules, remember to retain this mapping order // between the library or module and its function parameter. You can do this by // adding to the end of these lists, as shown in the commented examples below. require([ "splunkjs/mvc", "splunkjs/mvc/utils", "splunkjs/mvc/tokenutils", "underscore", "jquery", "splunkjs/mvc/simplexml", "splunkjs/mvc/layoutview", "splunkjs/mvc/simplexml/dashboardview", "splunkjs/mvc/simplexml/dashboard/panelref", "splunkjs/mvc/simplexml/element/chart", "splunkjs/mvc/simplexml/element/event", "splunkjs/mvc/simplexml/element/html", "splunkjs/mvc/simplexml/element/list", "splunkjs/mvc/simplexml/element/map", "splunkjs/mvc/simplexml/element/single", "splunkjs/mvc/simplexml/element/table", "splunkjs/mvc/simplexml/element/visualization", "splunkjs/mvc/simpleform/formutils", "splunkjs/mvc/simplexml/eventhandler", "splunkjs/mvc/simplexml/searcheventhandler", "splunkjs/mvc/simpleform/input/dropdown", "splunkjs/mvc/simpleform/input/radiogroup", "splunkjs/mvc/simpleform/input/linklist", "splunkjs/mvc/simpleform/input/multiselect", "splunkjs/mvc/simpleform/input/checkboxgroup", "splunkjs/mvc/simpleform/input/text", "splunkjs/mvc/simpleform/input/timerange", "splunkjs/mvc/simpleform/input/submit", "splunkjs/mvc/searchmanager", "splunkjs/mvc/savedsearchmanager", "splunkjs/mvc/postprocessmanager", "splunkjs/mvc/simplexml/urltokenmodel" // Add comma-separated libraries and modules manually here, for example: // ..."splunkjs/mvc/simplexml/urltokenmodel", // "splunkjs/mvc/tokenforwarder" ], function( mvc, utils, TokenUtils, _, $, DashboardController, LayoutView, Dashboard, PanelRef, ChartElement, EventElement, HtmlElement, ListElement, MapElement, SingleElement, TableElement, VisualizationElement, FormUtils, EventHandler, SearchEventHandler, DropdownInput, RadioGroupInput, LinkListInput, MultiSelectInput, CheckboxGroupInput, TextInput, TimeRangeInput, SubmitButton, SearchManager, SavedSearchManager, PostProcessManager, UrlTokenModel // Add comma-separated parameter names here, for example: // ...UrlTokenModel, // TokenForwarder ) { var pageLoading = true; // // TOKENS // // Create token namespaces var urlTokenModel = new UrlTokenModel(); mvc.Components.registerInstance('url', urlTokenModel); var defaultTokenModel = mvc.Components.getInstance('default', {create: true}); var submittedTokenModel = mvc.Components.getInstance('submitted', {create: true}); urlTokenModel.on('url:navigate', function() { defaultTokenModel.set(urlTokenModel.toJSON()); if (!_.isEmpty(urlTokenModel.toJSON()) && !_.all(urlTokenModel.toJSON(), _.isUndefined)) { submitTokens(); } else { submittedTokenModel.clear(); } }); // Initialize tokens defaultTokenModel.set(urlTokenModel.toJSON()); function submitTokens() { // Copy the contents of the defaultTokenModel to the submittedTokenModel and urlTokenModel FormUtils.submitForm({ replaceState: pageLoading }); } function setToken(name, value) { defaultTokenModel.set(name, value); 
submittedTokenModel.set(name, value); } function unsetToken(name) { defaultTokenModel.unset(name); submittedTokenModel.unset(name); } setToken("run",false); // // SEARCH MANAGERS // var search1 = new SearchManager({ "id": "search1", "earliest_time": "-30d@d", "latest_time": "now", "autostart": false, "sample_ratio": null, "search": "| klsearch $indicator$ | rename _raw as DetectedEvent | table LookupIndicator, DetectedEvent", "status_buckets": 0, "cancelOnUnload": true, "app": utils.getCurrentApp(), "auto_cancel": 90, "preview": true, "tokenDependencies": { }, "runWhenTimeIsUndefined": false }, {tokens: true, tokenNamespace: "submitted"}); // // SERVICE OBJECT // // Create a service object REST request using the Splunk SDK for JavaScript var service = mvc.createService({ owner: 'nobody' }); setToken('update', false); service.get( 'storage/collections/data/kl_cybertrace_settings/', null, function(err, result) { if (err) { console.error(err); } else { if (result.data.length > 0) { setToken('update', true); setToken('_key', result.data[0]._key); setToken('KTCHost', result.data[0].KTCHost || '127.0.0.1'); setToken('KTCPort', result.data[0].KTCPort || 9999); } } }); // // SPLUNK LAYOUT // $('header').remove(); new LayoutView({"hideChrome": false, "hideSplunkBar": false, "hideAppBar": false}) .render() .getContainerElement() .appendChild($('.dashboard-body')[0]); // // DASHBOARD EDITOR // new Dashboard({ id: 'dashboard', el: $('.dashboard-body'), showTitle: true, editable: true }, {tokens: true}).render(); // // VIEWS: VISUALIZATION ELEMENTS // var element1 = new TableElement({ "id": "element1", "count": 50, "managerid": "search1", "el": $('#element1') }, {tokens: true, tokenNamespace: "submitted"}).render(); element1.on("click", function(e) { if (e.field !== undefined) { e.preventDefault(); var url = TokenUtils.replaceTokenNames("http://tip.kaspersky.com/search?searchString=$click.value$", _.extend(submittedTokenModel.toJSON(), e.data), TokenUtils.getEscaper('url'), TokenUtils.getFilters(mvc.Components)); utils.redirect(url, false, "_blank"); } }); // // VIEWS: FORM INPUTS // var settingsktcaddr = new TextInput({ "id": "settings-ct-host", "searchWhenChanged": false, "default": "127.0.0.1", "value": "$KTCHost$", "el": $('#settings-ct-host') }, {tokens: true}).render(); var settingsktcport = new TextInput({ "id": "settings-ct-port", "searchWhenChanged": false, "default": "9999", "value": "$KTCPort$", "el": $('#settings-ct-port') }, {tokens: true}).render(); var input1 = new HtmlElement({ "id": "input1", "useTokens": true, "el": $('#input1') }, {tokens: true, tokenNamespace: "submitted"}).render(); DashboardController.addReadyDep(input1.contentLoaded()); input1.on("change", function(newValue) { FormUtils.handleValueChange(input1); }); var input2 = new TextInput({ "id": "input2", "searchWhenChanged": true, "default": "", "value": "$indicator$", "el": $('#input2') }, {tokens: true}).render(); input2.on("change", function(newValue) { FormUtils.handleValueChange(input2); }); // // SUBMIT FORM DATA // var submit = new SubmitButton({ id: 'submit', el: $('#search_btn'), text: 'Lookup' }, {tokens: true}).render(); submit.on("submit", function() { var collectionEndpoint, headers, method, record, tokens; tokens = mvc.Components.getInstance('default'); if (tokens.get('KTCHost') === "" ){ alert ("Incorrect value for Kaspersky CyberTrace address"); return; } if (tokens.get('KTCPort') === "" ){ alert ("Incorrect value for Kaspersky CyberTrace port"); return; } record = { KTCHost: tokens.get('KTCHost'), KTCPort: 
tokens.get('KTCPort') }; /* add/update record */ headers = { 'Content-Type': 'application/json' }; if (tokens.get('update')) { collectionEndpoint = 'storage/collections/data/kl_cybertrace_settings/' + tokens.get('_key'); } else { collectionEndpoint = 'storage/collections/data/kl_cybertrace_settings'; } method = 'POST'; service.request(collectionEndpoint, method, null, null, JSON.stringify(record), headers, null) .done(function(data) { data = JSON.parse(data); if (data._key) { submitTokens(); search1.startSearch(); } else { console.error('no key returned in data', data._key); } }) .fail(function(data) { console.error(data.statusText); statusMessage('error', data.statusText); }); }); // Initialize time tokens to default if (!defaultTokenModel.has('earliest') && !defaultTokenModel.has('latest')) { defaultTokenModel.set({ earliest: '0', latest: '' }); } if (!_.isEmpty(urlTokenModel.toJSON())){ submitTokens(); } // // DASHBOARD READY // DashboardController.ready(); pageLoading = false; } ); // ]]> </script> </body> </html>
Hi everyone, I'm currently working on integrating Kaspersky CyberTrace with Splunk and have encountered a couple of issues I need help with:

Converting the Indicators Lookup dashboard: I successfully converted the "Kaspersky CyberTrace Matches" and "Kaspersky CyberTrace Status" dashboards to Dashboard Studio. However, the "Indicators Lookup" dashboard does not have an option for conversion and throws an error when I try to open it: "HTML Dashboards are no longer supported. You can recreate your HTML dashboard using Dashboard Studio." The code for this dashboard is quite extensive (see the full source above). Does anyone have suggestions or best practices for converting it to Dashboard Studio effectively?

Data not displaying in dashboards: even though data is being received from Kaspersky and stored in the main index, the dashboards are not displaying any information. Has anyone faced similar issues, or could you provide insight into what might be going wrong here?

Any guidance or solutions to these problems would be greatly appreciated. Thanks in advance for your help!
In a Distributed Clustered Deployment with SHC - Multisite (M4/M14) model, is any additional license required? Is any downtime required while connecting Site A with Site N in the Multisite (M4/M14) model?
Dear All, Is there any delay option in Splunk multisite M4/M14? Requirement: Site A is the active site and Site N the passive site. Data ingestion at the active site should happen in real time, and data at Site N should be ingested at 1 AM every day. Is there any option in multisite to achieve this?
It looks like you are trying to find the app.name for the parent_span_id? To avoid using joins, try something like this:

index=your_index sourcetype=your_sourcetype
| fields trace_id, span_id, parent_span_id, app.name
| rename app.name as current_service
| eval join_id=parent_span_id
| appendpipe
    [| rename current_service as parent_service
    | eval join_id=span_id]
| eventstats values(parent_service) as parent_service by join_id trace_id
| where isnotnull(current_service)
| table trace_id parent_service current_service

If this isn't correct, please share some anonymised but representative raw events and a description of what it is you are trying to do.
If you had bucket problems, the error would come from another component. Normally, with stale fishbucket entries and similar problems, you'd simply remove a particular entry from the fishbucket, but in this case it looks as though the fishbucket database itself is damaged, so you probably need to remove the whole fishbucket: stop splunkd, remove var/lib/splunk/fishbucket, start splunkd. You might want to fiddle with the db first. If I remember correctly (but I wouldn't bet any money on it), it's a BerkeleyDB, so it might be possible to repair it.
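Roughly, assuming a default $SPLUNK_HOME layout (moving the directory aside instead of deleting it keeps a copy to inspect or restore):

# stop splunkd, move the damaged fishbucket aside, restart
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/var/lib/splunk/fishbucket $SPLUNK_HOME/var/lib/splunk/fishbucket.bak
$SPLUNK_HOME/bin/splunk start

splunkd will recreate an empty fishbucket on startup and reingest all monitored files from the beginning, so expect duplicates for anything already indexed.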
Indeed, this error is related to ingestion failure. I just thought the BTree problem was inside the buckets. To clean up, do I just delete everything in the fishbucket? Reingestion is not a problem, but I do not want to cause other behavior changes.