All Topics

I am looking to add a cumulative running total of hourly column values as a separate column. Here is the search:

index=main CompletedEvent | bin _time span=1h | stats dc(clientid) as HourlyClients by _time

I would like a second result column that accumulates the first column by hour. For example: the result in row 2 of the second column would be the sum of rows 1 and 2 of the first column, the result in row 3 of the second column would be the sum of rows 1 through 3 of the first column, and so on. Thanks in advance.
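A minimal sketch of one way to do this with streamstats, which keeps a running total down the result rows (the CumulativeClients field name is just an example):

index=main CompletedEvent
| bin _time span=1h
| stats dc(clientid) as HourlyClients by _time
| streamstats sum(HourlyClients) as CumulativeClients

Because streamstats works through the rows in order, each row's CumulativeClients is the sum of HourlyClients in that row and all rows above it.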
It looks like the issue is that the buckets have not rolled yet. Is there any reason we should not roll the buckets to fix the issue? I am new to clustering and would appreciate any comments. The cluster also shows an excessive-buckets warning; I think this is because the data is creating too many small buckets. So, in general, what would be a good plan to investigate, and what needs to be done to resync the data for clustering?
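A minimal sketch for sizing up the bucket situation before rolling anything, using dbinspect (the 100 MB cutoff is only an illustrative threshold for "small"):

| dbinspect index=*
| stats count as buckets, count(eval(sizeOnDiskMB < 100)) as small_buckets, sum(sizeOnDiskMB) as totalMB by index, state
| sort - small_buckets

A high share of small buckets in one index is a hint that something (frequent restarts, timestamp problems spreading events across wide time ranges) is forcing buckets to roll early, which is usually what the excessive-buckets warning is about.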
How to inspect each feed, based on sourcetype or source, by different criteria:
- Average ingestion rate per day
- Minimum event size over a 24-hour period
- Average event size over a 24-hour period
- Maximum event size over a 24-hour period
- Median event size over a 24-hour period
- Standard deviation of event size over a 24-hour period
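A minimal sketch of the event-size statistics over a 24-hour window, computed from raw event length (swap sourcetype for source in the by clause as needed; len(_raw) measures the raw event string at search time, which approximates but is not identical to licensed ingest size):

index=* earliest=-24h
| eval event_size=len(_raw)
| stats min(event_size) as min_size, avg(event_size) as avg_size, max(event_size) as max_size, median(event_size) as median_size, stdev(event_size) as stdev_size by sourcetype

For average ingestion rate per day, one option is the license usage log, where b is bytes and st is the sourcetype:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) as bytes by st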
The query below shows some details about ad-hoc searches. The "info" field in index=_audit has 4 possible values: completed, granted, canceled, failed. I assume 'granted' means that the user was granted access to run the query, i.e., has permission to access the indexes, run this type of query, etc. I assume 'canceled' means that the query was canceled. How do we know whether it was canceled by the user (the user pressed the Stop or Pause button) or timed out on the system side because it took too long to run? I found the field fully_completed_search, but it is always true. What would be some of the reasons to get a 'failed' status?
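A minimal sketch for tracing the lifecycle of a single search in the audit log, assuming you can grab its search_id (the fields below are the usual ones on index=_audit action=search events; the search_id value is a placeholder):

index=_audit action=search search_id="your_search_id_here"
| sort _time
| table _time, user, info, exec_time, total_run_time, savedsearch_name

Watching which info values a single search passes through, and comparing total_run_time against the search time limits of the user's role, can help separate a user-initiated cancel from a system-side timeout.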
Hi, I am looking for troubleshooting steps for a data ingestion issue through a Heavy Forwarder.
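A minimal sketch of a first check on the forwarder itself, looking for saturated pipeline queues in metrics.log (replace my_heavy_forwarder with the real host name):

index=_internal host=my_heavy_forwarder source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) by name

A queue such as parsingqueue or a tcpout queue sitting near 100% shows where data is getting stuck in the pipeline; pairing this with index=_internal host=my_heavy_forwarder source=*splunkd.log* (ERROR OR WARN) usually surfaces the underlying cause.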
I am using the CrowdStrike App in my playbook and trying to run the detonate file action. One of the required parameters is Vault ID, which is supposedly the Vault ID of the file. I am not quite sure what the Vault ID means.
I have two lookup tables called name.csv and id.csv. Both have a matching field called fullname. The id.csv file has an id field, but name.csv does not, and I want to add the id field to name.csv. Does anyone know how to do this? I tried | inputlookup name.csv | lookup id.csv fullname output id but it did not work.
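A minimal sketch of one way to do this. Note that | lookup expects a lookup definition name, so | lookup id.csv only works if a definition named id.csv exists; the join variant below works with the raw lookup files directly:

| inputlookup name.csv
| join type=left fullname
    [| inputlookup id.csv | fields fullname, id]
| outputlookup name.csv

The outputlookup at the end overwrites name.csv with the enriched rows, so it may be worth running the search without it first to verify the id values line up.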
Hello, I was testing JSON extraction for some JSON data, and it fails to index the last block, starting from RuntimeTasks. Please help with the props.conf.

{
    "Variant": "Tidesa",
    "File": "hello.hex",
    "Testgroup": "Accent",
    "Results": [
        {
            "TestModule": "CustComm_CustIf",
            "Name": "Customer Output",
            "TestCases": [
                {
                    "ID": 46563,
                    "Title": "asfs_Signals",
                    "Tested": 1,
                    "Result": 1
                },
                {
                    "ID": 46564,
                    "Title": "af46564_ValueAverage",
                    "Tested": 1,
                    "Result": 1
                }
            ]
        }
    ],
    "RuntimeTasks": null,
    "BUILD_NUMBER": "306",
    "JOB_BASE_NAME": "testing_PG",
    "JOB_URL": "https://is-jvale.de.ats.com/Acp/job/Mks/job/RB_MS_jfaj_fajja_testing/306",
    "NODE_NAME": "ast967"
}
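A minimal props.conf sketch for ingesting each JSON document as a single, untruncated event (the sourcetype name json_testresults is a placeholder; a JSON event losing its tail is most often line breaking in the middle of the document or the default 10000-character TRUNCATE limit):

[json_testresults]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{)
TRUNCATE = 0
KV_MODE = json
DATETIME_CONFIG = CURRENT

If the raw event in search already contains the whole document and only the extracted fields stop at RuntimeTasks, compare the event's raw text against the source file to see exactly where the cut happens.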
Hi, here is a challenge that only partly works as expected. On a HF I need to split syslog data to two different instances: one internal to the company, and one at an external company (that does investigations on the data). The external company has provided a Sysmon config file which collects everything they need, but which on the other hand provides too much internally. Splitting the data two ways works great using props, transforms, and outputs. The challenge now is to remove some of the events sent to the internal TCP group.

props.conf

[source::XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
TRANSFORMS-routing = sysmon-routeToExt, sysmon-routeToInt

transforms.conf

[sysmon-routeToInt]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*)
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Int

[sysmon-routeToExt]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*)
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Ext

One of the basic questions is: Is there a way to order the transforms in props so that everything is sent to one TCP group (external), while the same events are also filtered (dropping some events using a REGEX) before the remainder is sent to the second TCP group (internal)?
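A minimal sketch of one way to get there, relying on the fact that transforms run in order and a later match overwrites _TCP_ROUTING: the first stanza routes every event to both groups, and the second re-routes the events internal should not see to the external group only (the stanza names and the EventID pattern are placeholders for your own filter):

props.conf

[source::XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
TRANSFORMS-routing = sysmon-routeToBoth, sysmon-extOnly

transforms.conf

[sysmon-routeToBoth]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Int,TcpOut_Ext

[sysmon-extOnly]
REGEX = <EventID>3</EventID>
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Ext

FORMAT for _TCP_ROUTING accepts a comma-separated list of tcpout group names from outputs.conf, and this overwrite trick only works where events are parsed, which a heavy forwarder is.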
Using the Map Rule to Technique, I select a Rule Name, then I add multiple MITRE ATT&CK Techniques. Is there a limit to the number of techniques I can map to? Is this even recommended?
Related to ITSIID-I-326: "ITSI's Episode Review shows several KPIs such as MTTR, Episodes by Severity, Total Noise Reduction etc., which are made up of all episodes in ITSI. It would be great if this view were customisable so that every ITSI user only sees the KPIs for the episodes that this user is taking care of. For example, if there are several people using ITSI, an analyst would only see MTTR or Episodes by Severity for the episodes that he or she is working on."

The dashboard in Episode Review can be customised to display different visualisations, tables and search results. As a proof of concept I created a dashboard similar to the original delivered with Episode Review, but added search filters for the currently logged-in user:

{
  "dataSources": {
    "mttrSearch": {
      "options": {
        "query": "| tstats earliest(_time) as t1 where `itsi_notable_audit_index` activity=\"*resolved*\" [| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | fields user \n | format] by event_id \n| append \n [| tstats earliest(itsi_first_event_time) as t2 where `itsi_event_management_group_index` by itsi_group_id] \n| eval match_id=coalesce(event_id,itsi_group_id) \n| stats values(*) AS * by match_id \n| search event_id=* itsi_group_id=* \n| eval diff=t1-t2 \n| stats avg(diff) as t3 \n| eval avgDuration = round(t3/60,0) \n| fields - t3",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search",
      "name": "mttr"
    },
    "episodesBySeveritySearch": {
      "options": {
        "query": "| tstats count where index=itsi_grouped_alerts sourcetype=itsi_notable:group NOT source=itsi@internal@group_closing_event NOT itsi_dummy_closing_flag=* NOT itsi_bdt_event=* by itsi_group_id \n| lookup itsi_notable_group_user_lookup _key AS itsi_group_id OUTPUT owner severity status instruction \n| search [| rest /services/authentication/current-context splunk_server=local \n | eval owner = username \n | fields owner \n | format]\n| lookup itsi_notable_group_system_lookup _key AS itsi_group_id OUTPUT title description start_time last_time is_active event_count \n| stats count as \"Count\" by severity \n| sort - severity \n| eval severity=case(severity=1,\"Information\",severity=2,\"Normal\",severity=3,\"Low\",severity=4,\"Medium\",severity=5,\"High\",severity=6,\"Critical\") \n| rename severity as \"Severity\"",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search",
      "name": "ebs"
    },
    "noiseReductionSearch": {
      "options": {
        "query": "| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | table user, roles",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search",
      "name": "CU"
    },
    "percentAckSearch": {
      "options": {
        "query": "| tstats count as Acknowledged where index=itsi_notable_audit activity=*acknowledged* \n [| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | fields user \n | format] \n| appendcols \n [| tstats dc(itsi_group_id) as total where index=itsi_grouped_alerts sourcetype=itsi_notable:group NOT source=itsi@internal@group_closing_event NOT itsi_dummy_closing_flag=* NOT itsi_bdt_event=*] \n| eval acknowledgedPercent=(Acknowledged/total)*100 \n| table acknowledgedPercent",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search",
      "name": "EACK"
    },
    "mttaSearch": {
      "options": {
        "query": "| tstats earliest(_time) as t1 where index=itsi_notable_audit activity=\"*acknowledged*\" \n [| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | fields user \n | format] by event_id \n| append \n [| tstats earliest(itsi_first_event_time) as t2 where index=itsi_grouped_alerts sourcetype=itsi_notable:group NOT source=itsi@internal@group_closing_event NOT itsi_dummy_closing_flag=* NOT itsi_bdt_event=* by itsi_group_id] \n| eval match_id=coalesce(event_id,itsi_group_id) \n| stats values(*) AS * by match_id \n| search event_id=* itsi_group_id=* \n| eval diff=t1-t2 \n| stats avg(diff) as t3 \n| eval avgDuration = round(t3/60,0) \n| fields - t3",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search",
      "name": "MTTA"
    }
  },
  "visualizations": {
    "mttr": {
      "title": "Mean Time to Resolve for Current User",
      "type": "splunk.singlevalue",
      "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" },
      "dataSources": { "primary": "mttrSearch" }
    },
    "episodesBySeverity": {
      "title": "Episodes by Severity for Current User",
      "type": "splunk.bar",
      "options": { "backgroundColor": "#ffffff", "barSpacing": 5, "dataValuesDisplay": "all", "legendDisplay": "off", "showYMajorGridLines": false, "yAxisLabelVisibility": "hide", "xAxisMajorTickVisibility": "hide", "yAxisMajorTickVisibility": "hide", "xAxisTitleVisibility": "hide", "yAxisTitleVisibility": "hide" },
      "dataSources": { "primary": "episodesBySeveritySearch" }
    },
    "noiseReduction": {
      "title": "Current User",
      "type": "splunk.table",
      "context": { "backgroundColorThresholds": [ { "from": 95, "value": "#65a637" }, { "from": 90, "to": 95, "value": "#6db7c6" }, { "from": 87, "to": 90, "value": "#f7bc38" }, { "from": 85, "to": 87, "value": "#f58f39" }, { "to": 85, "value": "#d93f3c" } ] },
      "dataSources": { "primary": "noiseReductionSearch" },
      "showProgressBar": false,
      "showLastUpdated": false
    },
    "percentAck": {
      "title": "Episodes Acknowledged for Current User",
      "type": "splunk.singlevalue",
      "options": { "backgroundColor": "#ffffff", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" },
      "dataSources": { "primary": "percentAckSearch" }
    },
    "mtta": {
      "title": "Mean Time to Acknowledged for Current User",
      "type": "splunk.singlevalue",
      "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" },
      "dataSources": { "primary": "mttaSearch" }
    }
  },
  "layout": {
    "type": "grid",
    "options": { "display": "auto-scale", "height": 240, "width": 1440 },
    "structure": [
      { "item": "mttr", "type": "block", "position": { "x": 0, "y": 0, "w": 288, "h": 220 } },
      { "item": "episodesBySeverity", "type": "block", "position": { "x": 288, "y": 0, "w": 288, "h": 220 } },
      { "item": "noiseReduction", "type": "block", "position": { "x": 576, "y": 0, "w": 288, "h": 220 } },
      { "item": "percentAck", "type": "block", "position": { "x": 864, "y": 0, "w": 288, "h": 220 } },
      { "item": "mtta", "type": "block", "position": { "x": 1152, "y": 0, "w": 288, "h": 220 } }
    ]
  }
}
How can we limit EUM license usage? We have 100,000 mobile app users/devices, but we only need to monitor 10% of mobile users (10,000), i.e. 10,000 devices (Active Agents) per month, as under the old licensing model. The current license model is Real User Monitoring (on-prem) - Peak Edition (EUM Unified License).
Link to post: (Issue with Management activity Logs) by Abdulkareem https://community.splunk.com/t5/All-Apps-and-Add-ons/Issue-with-Management-activity-Logs/m-p/654348#M79599

Hi all, I have successfully integrated Office 365 with Splunk and am currently receiving logs from various sources, including message trace, Defender, among others. However, I have noticed an absence of management activity logs in the data. Upon further investigation, I encountered an error message located at $splunkpath/var/log/splunk/splunk_ta_o365_management_activity_*.log. I would greatly appreciate any assistance or insights into resolving this issue.

2023-08-15 12:57:44,362 level=INFO pid=3567796 tid=MainThread logger=splunksdc.collector pos=collector.py:run:267 | | message="Modular input started."
2023-08-15 12:57:44,384 level=INFO pid=3567796 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'test' start_time=1692093464 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2023-08-15 12:57:45,011 level=INFO pid=3567796 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_v2_token_by_psk:211 | datainput=b'test' start_time=1692093464 | message="Acquire access token success." expires_on=1692097064.011808
2023-08-15 12:57:45,715 level=ERROR pid=3567796 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:72 | datainput=b'test' start_time=1692093464 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 201, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/batch.py", line 54, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 690, in discover
    session = self._get_session()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 361, in _get_session
    self._enable_subscription(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 372, in _enable_subscription
    self._subscription.start(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 386, in start
    response = self._perform(session, "POST", "/subscriptions/start", params)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 419, in _perform
    return self._request(session, method, url, kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 453, in _request
    raise O365PortalError(response)
splunk_ta_o365.common.portal.O365PortalError: 400:{"error":{"code":"StartSubscription [CorrId=23540c54-3a29-4142-af13-0ce9a78eb47c][TenantId=*******,ContentType=Audit.AzureActiveDirectory,ApplicationId=*******,PublisherId=*******][AppId","message":"9b7ccc6-ce29-48bf-9dec-12384684ee5c] failed. Exception: Microsoft.Office.Compliance.Audit.DataServiceException: Tenant ******* does not exist.\r\n at Microsoft.Office.Compliance.Audit.API.AzureManager.<GetSubscriptionTableClientForTenantAsync>d__52.MoveNext() in d:\\dbs\\sh\\nibr\\0811_070645\\cmd\\e\\sources\\dev\\auditing\\src\\auditapiservice\\common\\AzureManager.cs:line 2113\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Microsoft.Office.Compliance.Audit.API.AzureManager.<GetAPISubscriptionAsync>d__22.MoveNext() in d:\\dbs\\sh\\nibr\\0811_070645\\cmd\\e\\sources\\dev\\auditing\\src\\auditapiservice\\common\\AzureManager.cs:line 549\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Microsoft.Office.Compliance.Audit.API.StartController.<StartSubscription>d__0.MoveNext() in d:\\dbs\\sh\\nibr\\0811_070645\\cmd\\19\\sources\\dev\\auditing\\src\\auditapiservice\\apifrontendservicerole\\Controllers\\StartController.cs:line 76"}}
2023-08-15 12:57:45,719 level=INFO pid=3567796 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."
Hello everyone, I have the fields below and I want the search to return results only when the difference between Previous_Time and New_Time is more than 5 seconds:

_time: Tue Aug 15 09:35:01 2023
host: hostname
EventCode: 4616
EventCodeDescription: The system time was changed.
Name: C:\Program Files (x86)\TrueTime\WinSync\WinSync.exe
Previous_Time: 2023-08-15T07:35:01.152758200Z
New_Time: 2023-08-15T07:35:01.152000000Z

Thank you.
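A minimal sketch, assuming both fields keep this ISO 8601 shape; %9Q is the nine-digit subsecond format specifier (if it does not parse in your environment, trim the fraction to six digits and use %6Q instead):

... EventCode=4616
| eval prev_epoch=strptime(Previous_Time, "%Y-%m-%dT%H:%M:%S.%9QZ")
| eval new_epoch=strptime(New_Time, "%Y-%m-%dT%H:%M:%S.%9QZ")
| where abs(new_epoch - prev_epoch) > 5

One caveat: values like these from Windows event logs sometimes carry invisible left-to-right mark characters; if strptime returns null, strip non-ASCII characters first, for example with replace(Previous_Time, "[^ -~]", "").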
Hi all, we want to use Splunk Synthetic Transaction Monitoring in Splunk Observability Cloud, so we have an account and set up some synthetic monitors. We want to query the results of these synthetic monitors from our Splunk Enterprise on-premises platform. For security reasons, connections are only allowed from on-prem to Splunk Observability Cloud. On our test search head I installed the Splunk Synthetic Monitoring Add-On (https://splunkbase.splunk.com/app/5608) and have to configure an access token for Observability Cloud. Which connections do I have to enable in our firewalls? Where do I configure the access in Observability Cloud? Thanks in advance for any help. Regards, Sascha
Hi Splunk Experts, I have different XML requests (100+ request types) as multi-line events. Is it possible to stats these requests and get a count per request type? The requests can have any values between their tags, so rex-ing every request individually to stats them would be a difficult task. Is there any other way to achieve this? Any suggestion would be very much helpful! Thanks in advance!
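A minimal sketch, assuming each event contains the request's root XML element, so a single generic rex can capture the tag name instead of one pattern per request type (index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| rex "<(?<request_name>[A-Za-z][\w:]*)[\s>]"
| stats count by request_name
| sort - count

The rex just grabs the first opening tag's name in each event (namespaced tags such as soap:Envelope come through whole), which is usually enough to count requests by type without touching their contents.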
Can anyone tell me why my table doesn't display the redirect_uri?

index=keycloak customerReferenceAccountId!=SERVICE* username!=test*@test.co.uk type=LOGIN*
| stats count(eval(type="LOGIN")) as successful_login count(eval(type="LOGIN_ERROR")) as login_error by username, ipAddress
| eval percentage_failure=((successful_login/login_error)*100)
| eval percentage_failure=round('percentage_failure', 2)
| where successful_login>0 AND login_error>7
| table username, ipAddress, redirect_uri, successful_login, login_error, percentage_failure
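For reference, a minimal sketch of one way to keep the field: stats drops every field that is not an aggregate or in the by clause, so redirect_uri has to be carried through explicitly, here with values() (which collects all distinct URIs per username/ipAddress pair):

index=keycloak customerReferenceAccountId!=SERVICE* username!=test*@test.co.uk type=LOGIN*
| stats count(eval(type="LOGIN")) as successful_login count(eval(type="LOGIN_ERROR")) as login_error values(redirect_uri) as redirect_uri by username, ipAddress
| eval percentage_failure=round((successful_login/login_error)*100, 2)
| where successful_login>0 AND login_error>7
| table username, ipAddress, redirect_uri, successful_login, login_error, percentage_failure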
Hi, I would like to add an alert's name and its trigger time to a lookup file whenever the alert is triggered. I don't need the search results; the alert name and trigger time would do. Basically, I need this data for reporting purposes. I am aware that this can be obtained from Triggered Alerts via the REST API, or from the _audit index. However, when I use the REST API for triggered alerts, the trigger time is not there, and for the _audit index, only admin has access. So I am trying to do something while the alert is being triggered.
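A minimal sketch of one way to do this from inside the alert's own search, assuming the alert triggers when the number of results is greater than zero (alert_history.csv and the alert name literal are placeholders):

<your alert search>
| appendpipe
    [ stats count
    | where count > 0
    | eval alert_name="My Alert Name", trigger_time=strftime(now(), "%Y-%m-%d %H:%M:%S")
    | fields alert_name trigger_time
    | outputlookup append=true alert_history.csv
    | where 1=2 ]

The appendpipe subpipeline writes one row per run that had results and then discards its own output with where 1=2, so the alert's result set and trigger behaviour stay unchanged.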
Hello all, I am going to upgrade Splunk to version 9.1.x. Inside my app I use JS which translates the page using i18n. When running the jQuery scan, I get the message: "This /path/to/js/file.js is importing the following dependencies which are not supported or externally documented by Splunk. splunk.i18n" Does anyone have a solution to this problem? Or, if Splunk can't do i18n in JS anymore, how do you translate your dashboards? Any hints are appreciated. Kind regards, Marie
I am developing a custom dashboard with a table created from XML. The table has an id of "summary_table", which I am extending with a custom cell renderer. The issue is that the table doesn't render the extended view on first load, but if I click on a table header (which triggers a sort), the extended view works. The same happens when I click on a page in the pagination view. (The screenshots showed the plain table on first load versus the extended view after clicking a header; all data shown was mock data.) I am thinking that the table didn't re-render after adding the new BaseCellRenderer. Here's my code:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc, TableView) {

    var CustomCellRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Enable this custom cell renderer for the Expiry_Extension field only
            return cell.field === 'Expiry_Extension';
        },
        render: function($td, cell) {
            if (cell.value != "") {
                // The cell value is "extension|expiration_date|order_id"
                const extension = cell.value.split("|")[0];
                const expiration_date = cell.value.split("|")[1]; // currently unused
                const order_id = cell.value.split("|")[2]; // currently unused
                const button_html = ' <button class="extend_expiry btn-sm btn btn-primary"><i class="icon icon-plus-circle"></i></button>';
                let html = "";
                if (extension === 'null') {
                    html += button_html;
                } else {
                    html += extension + button_html;
                }
                $td.html(html);
            }
        }
    });

    // Create an instance of the custom cell renderer
    var myCellRenderer = new CustomCellRenderer();

    // The setTimeout is a workaround attempt; even with it, the renderer
    // only shows up after a sort or a page change.
    setTimeout(function() {
        mvc.Components.get("summary_table").getVisualization(function(tableView) {
            tableView.addCellRenderer(myCellRenderer);
            tableView.table.render();
        });
    }, 1000);
});