All Topics


Hello, I am rather new to Splunk. I have two values that I want to search for, but I want to combine them under one name in a multiselect option (easier for the user). There are two coded values each for offsite and onsite samples:

offsite samples = "OffsiteComplete" and "OffsiteNeedsReview"
onsite samples = "QA Approved" and "Analysis Complete"

I have tried adding both as the choice value and using OR as the delimiter, but with no success. Please help! My code and a screenshot are below.

<form>
  <label>LIMS Analytical Data</label>
  <description>Select Year Range First</description>
  <search>
    <query>| makeresults | addinfo </query>
    <earliest>$time_token1.earliest$</earliest>
    <latest>$time_token1.latest$</latest>
    <done>
      <eval token="tokEarliestTimeString">strftime($result.info_min_time$,"%Y/%m/%d %H:%M:%S %p")</eval>
      <eval token="tokLatestTimeString">if($result.info_max_time$="+Infinity",strftime(relative_time(now(),"0d"),"%Y/%m/%d %H:%M:%S %p"),strftime($result.info_max_time$,"%Y/%m/%d %H:%M:%S %p"))</eval>
      <eval token="tokEarliestTime">$result.info_min_time$</eval>
      <eval token="tokLatestTime">if($result.info_max_time$="+Infinity",relative_time(now(),"0d"),$result.info_max_time$)</eval>
    </done>
  </search>
  <fieldset submitButton="true">
    <input type="dropdown" token="lookupToken" searchWhenChanged="true">
      <label>Year Range</label>
      <choice value="LIMSCSV.csv">2022-2023</choice>
      <choice value="LIMS2021.csv">2021</choice>
      <choice value="LIMS2020.csv">2020</choice>
      <choice value="LIMS2019.csv">2019</choice>
      <choice value="LIMSPre2019.csv">Pre-2019</choice>
      <default>LIMSCSV.csv</default>
      <initialValue>LIMSCSV.csv</initialValue>
    </input>
    <input type="time" token="time_token1" searchWhenChanged="true">
      <label>Select Time Range within Year Range</label>
      <default>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="lab_token" searchWhenChanged="true">
      <label>Result Status</label>
      <delimiter> OR </delimiter>
      <search>
        <query/>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <choice value="OffsiteComplete,OffsiteNeedsReview">Offsite Analysis</choice>
      <choice value="QA Approved">Onsite Analysis</choice>
      <valuePrefix>ResultStatus="</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
    <input type="multiselect" token="analyte_token" searchWhenChanged="true">
      <label>Select Analyte</label>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>Analyte="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>Analyte</fieldForLabel>
      <fieldForValue>Analyte</fieldForValue>
      <search>
        <query>|inputlookup $lookupToken$ |where _time &lt;= $tokLatestTime$ |where _time &gt;= $tokEarliestTime$ |where $lab_token$ |stats count by Analyte</query>
      </search>
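One pattern that is often suggested for this kind of grouping (a sketch only, not tested against this dashboard) is to make each choice value a complete, parenthesised filter clause and drop the valuePrefix/valueSuffix, so the multiselect token expands to valid SPL however many boxes are ticked:

<input type="multiselect" token="lab_token" searchWhenChanged="true">
  <label>Result Status</label>
  <!-- each choice value is already a complete clause; quotes are XML-escaped -->
  <choice value="(ResultStatus=&quot;OffsiteComplete&quot; OR ResultStatus=&quot;OffsiteNeedsReview&quot;)">Offsite Analysis</choice>
  <choice value="(ResultStatus=&quot;QA Approved&quot; OR ResultStatus=&quot;Analysis Complete&quot;)">Onsite Analysis</choice>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <delimiter> OR </delimiter>
</input>

With both options selected, $lab_token$ should then expand to ((ResultStatus="OffsiteComplete" OR ResultStatus="OffsiteNeedsReview") OR (ResultStatus="QA Approved" OR ResultStatus="Analysis Complete")), which the existing |where $lab_token$ clause can consume.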
Hello, which scanning tool would you recommend to report on Splunk add-on vulnerabilities?
Hello, can someone please help me with solutions for the errors below from the Splunk internal logs?

Host            Component         Error
splunk01-shc    SearchScheduler   The maximum disk usage quota for this user has been reached.
splunk01-shc    DispatchManager   The maximum number of concurrent historical searches on this instance has been reached.
splunk01-shc    UserManagerPro    SAML config is invalid, Reconfigure it.
splunk02        ExecProcessor     message from "{}" /bin/sh: {}: Permission denied
splunk02        ExecProcessor     message from "{}" File "/opt/splunk/etc/apps/splunk_assist/bin/instance_id_modular_input.py", line 29, in should_run
splunk01-shc    SearchScheduler   Search not executed: The maximum number of concurrent historical searches on this instance has been reached., concurrency_category={}, concurrency_context={}, current_concurrency={} concurrency_limit={}
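For the disk quota and concurrent-search messages specifically, those limits usually come from role settings in authorize.conf; a minimal sketch of the settings to review for the affected user's role (the role name and values are placeholders, not recommendations):

# authorize.conf on the search head (deployed via the deployer for an SHC)
[role_power]
srchDiskQuota = 1000    # max MB of search artifacts this user's searches may keep on disk
srchJobsQuota = 10      # max concurrent historical searches per user with this role

The SAML and ExecProcessor messages are separate issues and likely need to be looked at on their own (SAML configuration, and execute permissions on the script being run, respectively).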
I regularly run queries to ingest DB data via Heavy Forwarders, but is there a way to run ad-hoc DBX searches from the Splunk Cloud search head? It either doesn't recognize the connection or returns no results when I try with a connection that exists on the Heavy Forwarder.
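For reference, an ad-hoc DB Connect search is normally run with the dbxquery command; a sketch, where the connection name and SQL are placeholders:

| dbxquery connection="my_connection" query="SELECT * FROM my_table"

Connections and identities are defined per DB Connect installation, so a connection configured only on the Heavy Forwarder is typically not visible to a search head that has its own (or no) DB Connect install, which would match the behaviour described.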
I am using the nix TA to report on Unix and Linux server health. I'm trying to learn how things work by using the "Monitoring Unix and Linux" content pack and looking at how KPIs and the itsi_summary_metrics work together. I am analyzing the NIX:OS:Performance.NIX-df base search and see that it is using a "metrics search", but I can't find which field that base search is looking for in my data to generate any of the metrics - for example "Free MB /". When I look at my events index (in my case the index is "os"), I have the sourcetype of df but it does not have a "Free MB /" field. Is there a saved search generating the field that the base search will be using for that metric? I looked in saved searches, Fields, All configurations, but can't find anything. Perhaps I'm looking for the wrong thing? Am I thinking about this all wrong? I am new to ITSI and am going to take the ITSI course soon.
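One quick check (a sketch; I am assuming the content pack writes into some metrics index, whose name is not stated here) is to list which metric names actually exist, to see whether anything resembling the df output is being converted to metrics at all:

| mcatalog values(metric_name) WHERE index=* BY index

If no df-related metric names show up, the conversion from the os events index into metrics (however the content pack implements it) is the place to look next.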
Hello team, I have data from CSV files coming into my Splunk instance, and I can search and find that data. However, the values all come together in the "Event" field, and I would like to separate them on the commas so I can create dashboards for servers that haven't been patched in over 30 days and haven't been restarted in over 30 days. So I use the following search:

index="index_name" host=hostname source="path_to_file/file.csv" sourcetype="my_source"

And I get the results as follows (screenshot: how I see the event). I'm new to the tool and a bit overwhelmed by the amount of information, so I'm not sure which way to go. Is it possible to do this just using Splunk commands? Note: as you can see, I have hidden the real server names, IPs and other identifiers for compliance purposes.
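If the whole CSV row lands in _raw, one search-time approach is to split it on the commas with eval; a rough sketch where the column positions, field names and date format are assumptions, since the real columns are hidden:

index="index_name" host=hostname source="path_to_file/file.csv" sourcetype="my_source"
| eval parts=split(_raw, ",")
| eval server=mvindex(parts, 0), last_patch=mvindex(parts, 1), last_reboot=mvindex(parts, 2)
| eval days_since_patch=round((now() - strptime(last_patch, "%Y-%m-%d")) / 86400, 0)
| eval days_since_reboot=round((now() - strptime(last_reboot, "%Y-%m-%d")) / 86400, 0)
| where days_since_patch > 30 OR days_since_reboot > 30
| table server last_patch days_since_patch last_reboot days_since_reboot

Longer term, defining the sourcetype with proper CSV extraction at ingest (for example INDEXED_EXTRACTIONS = csv in props.conf on the forwarder) usually makes the fields appear automatically.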
I have a sourcetype that is exhibiting very odd behavior. If I try to run a lookup command such as the following: index=index_here sourcetype=sourcetype_here |lookup lookup_name JoiningID as JoiningID output Value1 Value2 It will not give me Value1 or Value2 in my results; however, if I instead run: index=index_here sourcetype=sourcetype_here |join type=left JoiningID [|inputlookup lookup_name] I get Value1 and Value2 joined in with no problem. What are some reasons for the actual lookup command not giving me any values?
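Common culprits are case or whitespace differences in JoiningID, or the lookup definition named lookup_name pointing at (or filtering) something other than the file that inputlookup reads. A small diagnostic sketch that compares against a trimmed copy of the key:

index=index_here sourcetype=sourcetype_here
| eval id_len=len(JoiningID), id_trimmed=trim(JoiningID)
| lookup lookup_name JoiningID AS id_trimmed OUTPUT Value1 Value2

If the values appear once the key is trimmed, stray whitespace in the events or in the CSV is the reason the plain lookup returns nothing.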
I am looking to sum up cumulative column totals by hour in a separate column. Here is the search: index=main CompletedEvent | bin _time span=1h | stats dc(clientid) as HourlyClients by _time I would like there to be a 2nd result column that accumulates the 1st column by hour.  For example:  the result in row 2 of the 2nd column will be the sum of rows 1 and 2 in the 1st column, the result of row 3 of the 2nd column will be the sum of rows 1 to 3 in the 1st column, etc. Thanks in advance.
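One common way to get the running total is streamstats applied after the stats; a sketch based on the search above:

index=main CompletedEvent
| bin _time span=1h
| stats dc(clientid) as HourlyClients by _time
| streamstats sum(HourlyClients) as CumulativeClients

Because the stats output is already sorted by _time, streamstats accumulates row by row, so row 2 of CumulativeClients is the sum of rows 1-2 of HourlyClients, row 3 is the sum of rows 1-3, and so on.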
It looks like the issue is that the buckets have not rolled yet. Are there any reasons we should not roll the buckets to fix the issue? I am new to clustering and would appreciate any comments. It also shows an excessive-buckets issue; I think this is because the data is creating too many small buckets. So, in general, what would be a good plan to investigate, and what needs to be done to resync the data for clustering?
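If rolling the hot buckets does turn out to be the right fix, one way to do it per index is the REST endpoint below (a sketch; the index name, credentials and host are placeholders, and it should be run against the indexer, ideally after checking bucket status on the cluster manager):

curl -k -u admin:changeme -X POST https://indexer:8089/services/data/indexes/my_index/roll-hot-buckets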
How do I inspect each feed by different criteria, based on sourcetype or source?

Average ingestion rate per day
Minimum event size, 24 hour period
Average event size, 24 hour period
Maximum event size, 24 hour period
Median event size, 24 hour period
Standard deviation of event size, 24 hour period
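A sketch of one way to do this over the last 24 hours, using the length of _raw as the event size; the index filter is a placeholder, and for exact ingestion volume per day the license_usage.log data is usually the better source:

index=* earliest=-24h@h latest=@h
| eval event_size=len(_raw)
| stats count AS events sum(event_size) AS total_bytes avg(event_size) AS avg_size min(event_size) AS min_size max(event_size) AS max_size median(event_size) AS median_size stdev(event_size) AS stdev_size by sourcetype
| eval ingest_MB_per_day=round(total_bytes / 1024 / 1024, 2)

Swap "by sourcetype" for "by source" to slice the same statistics by source instead.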
The query below shows some details about ad-hoc searches. The “info” field in index=_audit has 4 possible values: completed, granted, canceled, failed. I assume ‘granted’ means that the user was granted access to run the query, i.e., has permission to access the indexes, run this type of query, etc. I assume ‘canceled’ means that the query was cancelled. How do we know whether it was cancelled by the user (the user pressed the Stop or Pause button) or was timed out by the system because it took too long to run? I found the field fully_completed_search, but it is always true. What would be some of the reasons to get a ‘failed’ status?
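For what it's worth, a small sketch for lining up the relevant audit fields when chasing down one of these searches (these are fields normally present in the audit events; whether they expose the cancel reason is exactly the open question here):

index=_audit action=search (info=canceled OR info=failed)
| table _time user info search_id total_run_time event_count savedsearch_name search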
Hi, I am looking for troubleshooting steps for a data ingestion issue through a Heavy Forwarder.
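A couple of starting points that are often useful on the Heavy Forwarder side (sketches; replace the host filter with the HF's actual host name). First, check whether the indexers are receiving anything from it:

index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
| stats sum(kb) AS kb_received by hostname

Then check for blocked queues on the forwarder itself:

index=_internal sourcetype=splunkd host=<heavy_forwarder> source=*metrics.log* group=queue blocked=true
| stats count by name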
I am using the CrowdStrike app in my playbook and trying to run the detonate file action. One of the required parameters is Vault ID, which is supposedly the Vault ID of the file. I am not quite sure what the vault ID means.
I have two lookup tables called name.csv and id.csv. Both have a matching field called fullname. The id.csv file has an id field but name.csv does not, and I want to add the id field to name.csv. Does anyone know how to add the id field to the name.csv file? I tried | inputlookup name.csv | lookup id.csv fullname output id but it did not work.
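The usual missing piece is writing the merged result back with outputlookup; a sketch using join/inputlookup (which works even if id.csv has no lookup definition), with a placeholder name for the output file so the original isn't overwritten until the result looks right:

| inputlookup name.csv
| join type=left fullname
    [| inputlookup id.csv | fields fullname id]
| outputlookup name_with_id.csv

Note that the lookup command generally expects a lookup definition rather than a bare CSV file name, which may be why the original attempt returned nothing.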
Hello, I was testing JSON extraction for some JSON data, and it fails to index the last section (highlighted in bold, starting from RuntimeTasks). Please help with the props.conf.

{
    "Variant": "Tidesa",
    "File": "hello.hex",
    "Testgroup": "Accent",
    "Results": [
        {
            "TestModule": "CustComm_CustIf",
            "Name": "Customer Output",
            "TestCases": [
                {
                    "ID": 46563,
                    "Title": "asfs_Signals",
                    "Tested": 1,
                    "Result": 1
                },
                {
                    "ID": 46564,
                    "Title": "af46564_ValueAverage",
                    "Tested": 1,
                    "Result": 1
                }
            ]
        }
    ],
    "RuntimeTasks": null,
    "BUILD_NUMBER": "306",
    "JOB_BASE_NAME": "testing_PG",
    "JOB_URL": "https://is-jvale.de.ats.com/Acp/job/Mks/job/RB_MS_jfaj_fajja_testing/306",
    "NODE_NAME": "ast967"
}
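A hedged starting point for props.conf, assuming each complete JSON object should be one event and that the missing tail is caused by event breaking or truncation (the stanza name and the "Variant" anchor are assumptions based on the sample):

[my_json_sourcetype]
# keep the whole object together as a single event
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{\s*"Variant")
# don't cut off long events
TRUNCATE = 0
# no timestamp in the sample, so index at current time
DATETIME_CONFIG = CURRENT
# structured extraction of all keys, including RuntimeTasks, BUILD_NUMBER, JOB_URL, NODE_NAME
INDEXED_EXTRACTIONS = json
KV_MODE = none

INDEXED_EXTRACTIONS needs to be set where the data is first parsed (UF/HF), and KV_MODE = none on the search head avoids double extraction.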
Hi,

Here is a challenge that only partly works as expected. On a HF I need to split Sysmon data to two different destinations: one internal to the company, and one at an external company (which does investigations on the data). The external company has provided a Sysmon config file that collects everything they need, but it produces too much for internal use. Splitting the data and sending it both ways works great using props, transforms and outputs. The challenge now is to remove some of the events sent to the internal TCP output group.

props.conf

[source::XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
TRANSFORMS-routing = sysmon-routeToExt, sysmon-routeToInt

transforms.conf

[sysmon-routeToInt]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*)
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Int

[sysmon-routeToExt]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*)
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Ext

One of the basic questions is: is there a way to use props (order-wise) so that everything is sent to one TCP group (external), and the same events are then filtered (dropping some events using REGEX) so that only the remaining events are sent to the second TCP group (internal)?
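On the ordering question: the transforms in a TRANSFORMS- list run in order and a later match simply overwrites _TCP_ROUTING, so a pattern that is often used is to first route everything to both groups and then re-route only the unwanted events to the external group alone. A sketch, where the EventID in the filter is just a placeholder for whatever should not go internally:

props.conf

[source::XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
TRANSFORMS-routing = sysmon-routeToBoth, sysmon-extOnly

transforms.conf

[sysmon-routeToBoth]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Int,TcpOut_Ext

[sysmon-extOnly]
# events matching this regex end up only in the external group (placeholder EventID)
REGEX = <EventID>10</EventID>
DEST_KEY = _TCP_ROUTING
FORMAT = TcpOut_Ext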
Using the Map Rule to Technique, I select a Rule Name, then I add multiple MITRE ATT&CK Techniques. Is there a limit to the number of techniques I can map to? Is this even recommended?
Related to ITSIID-I-326: "ITSI's Episode Review shows several KPI’s such as MTTR, Episodes by Severity, Total Noise Reduction etc. which are made up by all episodes in ITSI. It would be great if this view was customisable so that every ITSI user only sees the KPI’s for the episodes that this user is taking care of. For example, if there are several people using ITSI - an analyst would only see MTTR or Episodes by Severity for the episodes that he or she is working on."

The dashboard in Episode Review can be customised to display different visualisations, tables and search results. As a proof of concept I created a dashboard similar to the original delivered with Episode Review, but added search filters for the currently logged-in user:
Related to ITSIID-I-326  "ITSI's Episode Review shows several KPI’s such as MTTR, Episodes by Severity, Total Noise Reduction etc. which are made up by all episodes in ITSI. It would be great if this view was customisable so that every ITSI user only sees the KPI’s for the episodes that this user is taking care of. For example, if there are several people using ITSI - an analyst would only see MTTR or Episodes by Severity for the episodes that he or she is working on."  The dashboard in Episode Review can be customised to display different visualisation,  tables and search results.  As a proof of concept I created a similar dashbord to the original that is delivered for Episode review, but added search filters to the current logged in user.        { "dataSources": { "mttrSearch": { "options": { "query": "| tstats earliest(_time) as t1 where `itsi_notable_audit_index` activity=\"*resolved*\" [| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | fields user \n | format] by event_id \n| append \n [| tstats earliest(itsi_first_event_time) as t2 where `itsi_event_management_group_index` by itsi_group_id] \n| eval match_id=coalesce(event_id,itsi_group_id) \n| stats values(*) AS * by match_id \n| search event_id=* itsi_group_id=* \n| eval diff=t1-t2 \n| stats avg(diff) as t3 \n| eval avgDuration = round(t3/60,0) \n| fields - t3", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search", "name": "mttr" }, "episodesBySeveritySearch": { "options": { "query": "| tstats count where index=itsi_grouped_alerts sourcetype=itsi_notable:group NOT source=itsi@internal@group_closing_event NOT itsi_dummy_closing_flag=* NOT itsi_bdt_event=* by itsi_group_id \n| lookup itsi_notable_group_user_lookup _key AS itsi_group_id OUTPUT owner severity status instruction \n| search [| rest /services/authentication/current-context splunk_server=local \n | eval owner = username \n | fields owner \n | format]\n| lookup itsi_notable_group_system_lookup _key AS itsi_group_id OUTPUT title description start_time last_time is_active event_count \n| stats count as \"Count\" by severity \n| sort - severity \n| eval severity=case(severity=1,\"Information\",severity=2,\"Normal\",severity=3,\"Low\",severity=4,\"Medium\",severity=5,\"High\",severity=6,\"Critical\") \n| rename severity as \"Severity\"", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search", "name": "ebs" }, "noiseReductionSearch": { "options": { "query": "| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | table user, roles", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search", "name": "CU" }, "percentAckSearch": { "options": { "query": "| tstats count as Acknowledged where index=itsi_notable_audit activity=*acknowledged* \n [| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | fields user \n | format] \n| appendcols \n [| tstats dc(itsi_group_id) as total where index=itsi_grouped_alerts sourcetype=itsi_notable:group NOT source=itsi@internal@group_closing_event NOT itsi_dummy_closing_flag=* NOT itsi_bdt_event=*] \n| eval acknowledgedPercent=(Acknowledged/total)*100 \n| table acknowledgedPercent", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search", "name": "EACK" }, "mttaSearch": { "options": { "query": "| tstats earliest(_time) as t1 where 
index=itsi_notable_audit activity=\"*acknowledged*\" \n [| rest /services/authentication/current-context splunk_server=local \n | eval user = username \n | fields user \n | format] by event_id \n| append \n [| tstats earliest(itsi_first_event_time) as t2 where index=itsi_grouped_alerts sourcetype=itsi_notable:group NOT source=itsi@internal@group_closing_event NOT itsi_dummy_closing_flag=* NOT itsi_bdt_event=* by itsi_group_id] \n| eval match_id=coalesce(event_id,itsi_group_id) \n| stats values(*) AS * by match_id \n| search event_id=* itsi_group_id=* \n| eval diff=t1-t2 \n| stats avg(diff) as t3 \n| eval avgDuration = round(t3/60,0) \n| fields - t3", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search", "name": "MTTA" } }, "visualizations": { "mttr": { "title": "Mean Time to Resolve for Current User", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" }, "dataSources": { "primary": "mttrSearch" } }, "episodesBySeverity": { "title": "Episodes by Severity for Current User", "type": "splunk.bar", "options": { "backgroundColor": "#ffffff", "barSpacing": 5, "dataValuesDisplay": "all", "legendDisplay": "off", "showYMajorGridLines": false, "yAxisLabelVisibility": "hide", "xAxisMajorTickVisibility": "hide", "yAxisMajorTickVisibility": "hide", "xAxisTitleVisibility": "hide", "yAxisTitleVisibility": "hide" }, "dataSources": { "primary": "episodesBySeveritySearch" } }, "noiseReduction": { "title": "Current User", "type": "splunk.table", "context": { "backgroundColorThresholds": [ { "from": 95, "value": "#65a637" }, { "from": 90, "to": 95, "value": "#6db7c6" }, { "from": 87, "to": 90, "value": "#f7bc38" }, { "from": 85, "to": 87, "value": "#f58f39" }, { "to": 85, "value": "#d93f3c" } ] }, "dataSources": { "primary": "noiseReductionSearch" }, "showProgressBar": false, "showLastUpdated": false }, "percentAck": { "title": "Episodes Acknowledged for Current User", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "dataSources": { "primary": "percentAckSearch" } }, "mtta": { "title": "Mean Time to Acknowledged for Current User", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" }, "dataSources": { "primary": "mttaSearch" } } }, "layout": { "type": "grid", "options": { "display": "auto-scale", "height": 240, "width": 1440 }, "structure": [ { "item": "mttr", "type": "block", "position": { "x": 0, "y": 0, "w": 288, "h": 220 } }, { "item": "episodesBySeverity", "type": "block", "position": { "x": 288, "y": 0, "w": 288, "h": 220 } }, { "item": "noiseReduction", "type": "block", "position": { "x": 576, "y": 0, "w": 288, "h": 220 } }, { "item": "percentAck", "type": "block", "position": { "x": 864, "y": 0, "w": 288, "h": 220 } }, { "item": "mtta", "type": "block", "position": { "x": 1152, "y": 0, "w": 288, "h": 220 } } ] } }                
How can we limit EUM license usage? We have 100,000 mobile app users/devices, but we only need to monitor 10% of mobile users, i.e., 10,000 devices (active agents) per month, as under the old licensing model. The current license model is Real User Monitoring (on-prem) - Peak Edition (EUM Unified License).
Link to post: (Issue with Management activity Logs) by Abdulkareem
https://community.splunk.com/t5/All-Apps-and-Add-ons/Issue-with-Management-activity-Logs/m-p/654348#M79599

Hi all,

I have successfully integrated Office 365 with Splunk and am currently receiving logs from various sources, including message trace, defender, among others. However, I have noticed an absence of management activity logs in the data. Upon further investigation, I encountered an error message located at $splunkpath/var/log/splunk/splunk_ta_o365_management_activity_*.log. I would greatly appreciate any assistance or insights into resolving this issue.

2023-08-15 12:57:44,362 level=INFO pid=3567796 tid=MainThread logger=splunksdc.collector pos=collector.py:run:267 | | message="Modular input started."
2023-08-15 12:57:44,384 level=INFO pid=3567796 tid=MainThread logger=splunk_ta_o365.common.settings pos=settings.py:load:36 | datainput=b'test' start_time=1692093464 | message="Load proxy settings success." enabled=False host=b'' port=b'' username=b''
2023-08-15 12:57:45,011 level=INFO pid=3567796 tid=MainThread logger=splunk_ta_o365.common.portal pos=portal.py:get_v2_token_by_psk:211 | datainput=b'test' start_time=1692093464 | message="Acquire access token success." expires_on=1692097064.011808
2023-08-15 12:57:45,715 level=ERROR pid=3567796 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:72 | datainput=b'test' start_time=1692093464 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 201, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/batch.py", line 54, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 690, in discover
    session = self._get_session()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 361, in _get_session
    self._enable_subscription(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 372, in _enable_subscription
    self._subscription.start(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 386, in start
    response = self._perform(session, "POST", "/subscriptions/start", params)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 419, in _perform
    return self._request(session, method, url, kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 453, in _request
    raise O365PortalError(response)
splunk_ta_o365.common.portal.O365PortalError: 400:{"error":{"code":"StartSubscription [CorrId=23540c54-3a29-4142-af13-0ce9a78eb47c][TenantId=*******,ContentType=Audit.AzureActiveDirectory,ApplicationId=*******,PublisherId=*******][AppId","message":"9b7ccc6-ce29-48bf-9dec-12384684ee5c] failed. 
Exception: Microsoft.Office.Compliance.Audit.DataServiceException: Tenant ******* does not exist.\r\n at Microsoft.Office.Compliance.Audit.API.AzureManager.<GetSubscriptionTableClientForTenantAsync>d__52.MoveNext() in d:\\dbs\\sh\\nibr\\0811_070645\\cmd\\e\\sources\\dev\\auditing\\src\\auditapiservice\\common\\AzureManager.cs:line 2113\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Microsoft.Office.Compliance.Audit.API.AzureManager.<GetAPISubscriptionAsync>d__22.MoveNext() in d:\\dbs\\sh\\nibr\\0811_070645\\cmd\\e\\sources\\dev\\auditing\\src\\auditapiservice\\common\\AzureManager.cs:line 549\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Microsoft.Office.Compliance.Audit.API.StartController.<StartSubscription>d__0.MoveNext() in d:\\dbs\\sh\\nibr\\0811_070645\\cmd\\19\\sources\\dev\\auditing\\src\\auditapiservice\\apifrontendservicerole\\Controllers\\StartController.cs:line 76"}}
2023-08-15 12:57:45,719 level=INFO pid=3567796 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."