I have data like this:

{
  env: prod
  host: prod01
  name: appName
  info: {
    data: [ ... ]
    indicators: [
      {
        details: {
          A.runTime: 434
          A.Count: 0
          B.runTime: 0
          B.Count: 0
          ...
        }
        name: timeCountIndicator
        status: UP
      }
      {
        details: {
          A.downCount: 2
          A.nullCount: 0
          B.downCount: 0
          B.nullCount: 0
          ...
        }
        name: downCountIndicator
        status: UP
      }
    ]
    status: DOWN
  }
  metrics: { ... }
  ping: 1
}

I only want to extract the fields in info.indicators{}.details when the info.indicators{}.name of that element is "timeCountIndicator". I tried spath combined with table, mvexpand, and where:

... | spath path=info.indicators{} output=indicators | table indicators | mvexpand indicators | where match(indicators, "timeCountIndicator")

However, this returns each record as a string, and it is hard to convert the string back into fields for further processing. (Technically extract/rex can deal with it, but it takes a REALLY long time to write an extraction for every field in details when I only need some of them.) Is there an easier way to deal with this?
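A minimal sketch of one possible approach, assuming the raw event is valid JSON: spath with input= re-parses the expanded element, and a final spath with no path argument auto-extracts every key under details, including dotted names like A.runTime.

... | spath path=info.indicators{} output=indicator
| mvexpand indicator
| spath input=indicator path=name output=indicator_name
| where indicator_name="timeCountIndicator"
| spath input=indicator path=details output=details
| spath input=details

The last spath turns each key of details into its own field on the event, so no per-field rex is needed.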
Hello - I have several dashboards that are presenting the user with a pop-up box. Reviewing the browser console, the culprit seems to be common.js. The dashboard is already using version="1.1", which I have seen suggested in other posts, but the issue persists. The dashboard doesn't reference any .js scripts, nor does it use any lookups to generate results.

<form version="1.1" hideEdit="false">

Any suggestions are appreciated. Thank you.
Analyze Transaction Scores to understand the impact of increased user activity

Video Length: 2 min 43 seconds

CONTENTS | Introduction | Video | Resources | About the presenter

An increase in user activity can magnify the impact of degraded performance if systems are not fully tuned. A small problem can quickly become an exponential one if it is not addressed.

The AppDynamics Transaction Scorecard helps you focus on any issue that grows as user access grows by providing a simple yet effective indication of how transactions perform according to one of five categories: normal, slow, very slow, stalled, or those that have errors.

The scorecard directs you to the snapshots that provide the details necessary to understand where the largest surface-area issue is, which helps you fix things quickly.

Additional Resources

Learn more about trace analysis in the documentation:

Monitor the performance of business transactions
Troubleshoot business transaction performance with transaction snapshots

About the presenter: Douglas Lindee

Douglas Lindee joined Cisco AppDynamics as a Field Architect in late 2021, with a 20+ year career behind him in systems, application, and network monitoring, event management, reporting, and automation - most recently on an extended engagement focusing on AppDynamics. With this broad view of monitoring solutions and technology, he serves as a point of technical escalation, assisting sales teams in overcoming technical challenges during the sales process.
Hello Splunkers, I am new to Splunk and am trying to figure out how to parse nested JSON data emitted by an end-of-line test. Here is a sample event:

{"serial_number": "PLACEHOLDER1234", "type": "Test", "result": "Pass", "logs": [{"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"}, {"test_name": "Hardware Rev", "result": "Pass", "received": "4"}, {"test_name": "Firmware Rev", "result": "Pass", "received": "1.8.3.99", "expected": "1.8.3.99"}, {"test_name": "Set Serial Number", "result": "Pass", "received": "1 A S \n", "expected": "1 A S"}, {"test_name": "Verify serial number", "result": "Pass", "received": "JC0024EW1482300425", "expected": "JC0024EW1482300425", "reason": "Truncated full serial number: 30913JC0024EW1482300425 to JC0024EW1482300425"}, {"test_name": "Thermocouple", "pt1_ugc": "24969.0", "pt1": "25000", "pt2_ugc": "19954.333333333332", "pt2": "20000", "pt3_ugc": "14993.666666666666", "pt3": "15000", "result": "Pass", "tolerance": "1000 deci-mV"}, {"test_name": "Cold Junction", "result": "Pass", "ugc_cj": "278", "user_temp": "270", "tolerance": "+ or - 5 C"}, {"test_name": "Glow Plug Open and Short", "result": "Pass", "received": "GP Open, Short, and Load verified OK.", "expected": "GP Open, Short, and Load verified OK."}, {"test_name": "Glow Plug Power On", "result": "Pass", "received": "User validated Glow Plug Power"}, {"test_name": "Glow Plug Measure", "pt1_ugc": "848", "pt1": "2070", "pt1_tolerance": "2070", "pt2_ugc": "5201", "pt2": "5450", "pt2_tolerance": "2800", "result": "Pass"}, {"test_name": "Motor Soft Start", "result": "Pass", "received": "Motor Soft Start verified", "expected": "Motor Soft Start verified by operator"}, {"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}, {"test_name": "Fan", "ugc_rpm": 2436.0, "rpm": 2130, "rpm_t": 400, "ugc_v": 653.3333333333334, "v": 630, "v_t": 160, "result": "Pass"}, {"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P']"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "DFU Test", "result": "Pass", "received": "Found DFU device"}, {"test_name": "Power Cycle", "result": "Pass", "received": "User confirmed power cycle"}, {"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "USB Power", "result": "Pass", "received": "USB Power manually verified"}]}

I want to be able to extract the test data (all key-value pairs) from each test. Ideally I would like to create dashboard charts showing the response from the Motor and Fan tests, among others. Here is a sample search I have been using, which lets me create a table with the serial number, overall test result, individual test name, and individual test result:

index="factory_mtp_events" | search sourcetype="placeholder" source="placeholder" serial_number="PLACEHOLDER*"
| spath logs{} output=logs
| stats count by serial_number result logs
| eval _raw=logs
| spath test_name output=test_name
| spath result output=test_result
| table serial_number result test_name test_result

How can I index into the logs{} section and pull out all results dependent on test_name?
So, how can I query for logs{}.test_name="Motor" and have the result yield:

{"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}
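A sketch of one way to do this, assuming each event is the JSON shown above: expand logs{}, filter on the expanded element's test_name, then auto-extract all of its key-value pairs.

index="factory_mtp_events" sourcetype="placeholder"
| spath path=logs{} output=log
| mvexpand log
| spath input=log path=test_name output=test_name
| where test_name="Motor"
| spath input=log

The final spath with no path argument extracts every key in the matched element (R_rpm_ugc, F_v, result, and so on) as separate fields, ready for charting.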
I have this search query, and it is working fine:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ("test.aspx")
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_today" by pp_user_action_name
| stats count(pp_user_action_response) AS "Today_Calls", perc90(pp_user_action_response) AS "Perc90_today" by pp_user_action_name Avg_today
| eval Perc90_today=round(Perc90_today/1000,2)
| eval Avg_today=round(Avg_today/1000,2)
| table pp_user_action_name, Today_Calls, Avg_today, Perc90_today

PFA screenshot for the results. Now I am trying to pass the pp_user_action_name values from the test.csv file, and I am not getting any results:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="test"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ([| inputlookup test.csv])
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_today" by pp_user_action_name
| stats count(pp_user_action_response) AS "Today_Calls", perc90(pp_user_action_response) AS "Perc90_today" by pp_user_action_name Avg_today
| eval Perc90_today=round(Perc90_today/1000,2)
| eval Avg_today=round(Avg_today/1000,2)
| table pp_user_action_name, Today_Calls, Avg_today, Perc90_today

How can I fix this? Thanks in advance.
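A sketch of one possible fix, assuming test.csv has a column named pp_user_action_name: the in() function does not accept a subsearch, but a subsearch used with the search command expands into an OR of field=value terms, so replacing the where line with a search line should work:

...
| spath output=pp_user_action_name input=user_actions path=name
| search [| inputlookup test.csv | fields pp_user_action_name]
...

If the lookup's column has a different name, rename it inside the subsearch (e.g. | rename name AS pp_user_action_name) so the generated terms match the event field.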
I have configured the app for Microsoft 365, which was working properly, but it stopped working, and after checking it was found that one of the keys or certificates had expired. I contacted the administrator asking for the "Client Secret", and he gave me that information, but the configuration also asks for the "Cloud App Security Token" field, and I really have no idea what information I should request from the administrator for it. I would be grateful if you could explain, if possible. Thanks.
Hi, could someone assist me in setting the threshold for this correlation search in ES? It is generating an excessive number of notables - roughly 30k over the last 7 days. How can we reduce the number of notables? Additionally, I've provided the bytes_out data for the last 24 hours below; please suggest a threshold based on that data.

| tstats `summariesonly` count values(sourcetype) AS sourcetype, values(All_Traffic.src_zone) AS src_zone, earliest(_time) as earliest, latest(_time) as latest, values(All_Traffic.action) AS action, values(All_Traffic.bytes_out) AS bytes_out, values(All_Traffic.bytes_in) AS bytes_in, sum(All_Traffic.bytes) AS bytes, values(All_Traffic.direction) AS direction, values(All_Traffic.app) AS app from datamodel=Network_Traffic

bytes_out values for the last 24 hours:

163 594 594 594 594 294 686 215 392 392 98 954 215 86 424 900 530 594 594 117 294 882 148 258 320 594 516 142 215 159 215 86 98 98 369 401 159 215 215 594 212 215 220 585 203 594 680 212 159 159 159 159 159 718 159 159 159 159 594 221 146 318 318 159 159 318 318 318 318 159 159 159 159 159 159 636 318 159 159 159 159 159 159 159 159 159 159 159 159 159 159 318 159 318 318 318 318 326 159 159 753 159 326 657 912 159 318 159 159 159 159 159 318 148 148 814 594 320 159 159 159 159 159 159 159 159 159 318 318 159 795 318 318 159 159 565 870 159 321 912 318 318 508 159 159 567 487 159 836 507 159 159 318 477 318 318 159 159 318 318 318 477 246 155 594 594 594 594 594 594 99 159 159 222 241 159 438 565 400 159 159 159 318 795 148 119 667 159 479 486 477 477 406 828 477 222 222 148 753 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 594 784 323 594 318 159 388 318 318 711 318 388 159 159 159 159 350 350 318 318 560 318 318 719 318 646 620 159 801 159 620 159 779 318 912 318 318 318 318 318 318 323 641 810 318 318 318 323 620 318 620 318 870 159 159 159 620 461 318 318 779 318 870 159 870 323 388 318 318 870 318 350 832 318 159 318 318 810 318 159 318 318 318 318 318 733 318 323 323 323 651 159 159 318 318 318 318 318 318 159 159 159 159 159 159 159 159 159 159 159 159 318 318 318 318 159 159 159 159 159 159 159 159 318 159 159 159 159 159 159 159 318 159 319 318 318 665 935 356 574 197 197 201 159 477 477 963 477 486 159 318 159 594 155 824 400 350 318 477 222 159 222 296 518 666 318 477 171 318 318 159 159 159 159 155 318 318 318 318 477 159 159 159 159 318 318 159 318 159 159 318 722 318 318 439 549 328 477 159 318 964 603 318 318 159 159 196 370 148 753 159 159 569 159 765 477 594 370 370 318 318 636 318 466 587 428 444 159 148 148 159 159 159 159 159 159 159 159 159 159 159 159 159 753 594 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 159 477 159 758 326 979 159 159 318 318 318 318 318 594 318 318 159 318 159 318 159 159 159 159 159 159 159 159 318 318 318 318 159 159 636 159 159 679 159 753 667 318 318 318 159 159 159 159 753 331 331 318 159 649 159 353 353 159 159 512 159 326 955 159 753 159 326 326 159 159 912 753 159 159 594 325 325 318 318 912 159 318 159 318 326 159 159 753 159 326 924 318 943 159 665 159 594 594 400 159 159 159 159 159 159 159 159 159 159 159 159 159 159 908 222 439 525 318 159 603 159 159 148 222 318 318 728 318 318 159 159 159 159 155 155
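A minimal sketch of one way to set a data-driven threshold; the by clause and field names here are assumptions, since the original search was truncated. With the sample bytes_out values almost all below 1000, a static floor such as bytes_out > 1000 would suppress the bulk of these notables, or a percentile cut-off can be computed from the data itself:

| tstats `summariesonly` sum(All_Traffic.bytes_out) AS bytes_out from datamodel=Network_Traffic by All_Traffic.src
| eventstats perc95(bytes_out) AS p95
| where bytes_out > p95

Tune the percentile (or the static value) against a week of data before enabling it in the correlation search.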
I have a multivalue field which I would like to expand into individual fields, like so:

| makeresults count=1
| eval a=mvappend("1","7")
| eval a_0=mvindex(a,0,0)
| eval a_1=mvindex(a,1,1)

However, the length might be greater than 2, and I would like a generic solution. I know I can create an MV field with an index, use mvexpand, and then stats to get everything back into a single event, but I run into memory issues with this on my own data. In short: solve the issue in a generic fashion without using mvexpand.
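A sketch of one mvexpand-free approach, assuming Splunk 9.0+ (foreach mode=multivalue iterates over the values of an MV field, and <<ITEM>> is substituted textually, so it can build the field names):

| makeresults count=1
| eval a=mvappend("1","7","3")
| eval idx=mvrange(0, mvcount(a))
| foreach mode=multivalue idx [ eval a_<<ITEM>> = mvindex(a, <<ITEM>>) ]
| fields - idx

Because everything happens inside a single event, there is no mvexpand memory blow-up regardless of how many values a contains.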
Hi. We noticed that a few of our RHEL8 servers with splunkforwarder installed log the line below up to thousands of times, causing splunkd.log files to grow excessively and fill the /opt directory. Sometimes it occurs every few seconds, while at other times it logs hundreds of times per second. So far only a handful of servers are experiencing the problem, and we have many others running the same version and OS.

09-17-2023 20:33:50.029 +0000 ERROR BTreeCP [2386469 TcpOutEloop] - failed: failed to mkdir /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db/snapshot.tmp: File exists

Restarting the splunkforwarder service mitigates the problem temporarily, but the error recurs within a few days. When the error messages come in, the directory already exists and contains files:

# ls /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db/snapshot.tmp/
btree_index.dat btree_records.dat

We are not sure what causes the issue or how to reproduce it.
Hello! I have a Splunk Enterprise 9.0.7 deployment. I have a local user with the "power" role. When connecting to the Search & Reporting app, that user can only see the Search option. Isn't the "power" role able to access other app features? The expectation is to see what users with the "admin" role see. What have I done wrong? Thank you and best regards, Andrew
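A small diagnostic sketch - not a fix, just a way to compare what the two roles actually grant (run it as an admin; the REST endpoint is standard):

| rest /services/authorization/roles splunk_server=local
| search title="power" OR title="admin"
| table title capabilities imported_capabilities srchIndexesAllowed

If the missing menu items come from capabilities the power role lacks, they will show up only in the admin row; visibility of views can also be restricted per app via the view's permissions rather than the role.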
Trying to set up an alert to show any login that has had 500 logon failures in under 30 minutes. Here is what I currently have (with non-relevant data changed):

index=* sourcetype=* action=failure EventCode=4771 OR EventCode=4776
| bucket _time span=30m
| stats count by user
| where count>500

I want to make sure this is correct. Thanks!
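For what it's worth, a sketch of how this is commonly written - two details in the draft above would likely change the results: without parentheses, AND binds tighter than OR, so the EventCode filter leaks, and stats count by user ignores the 30-minute buckets unless _time is in the by clause:

index=* sourcetype=* action=failure (EventCode=4771 OR EventCode=4776)
| bucket _time span=30m
| stats count by _time, user
| where count > 500

The index and sourcetype wildcards are kept from the original; narrowing them will make the alert much cheaper to run.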
Hi. I use metrics.log a lot on the indexer side to debug bottlenecks and/or stress inside the infrastructure. There is a field I can't really understand at all:

INFO Metrics - group=tcpin_connections x.x.x.x:50496:9997 connectionType=cookedSSL sourcePort=50496 sourceHost=x.x.x.x sourceIp=x.x.x.x destPort=9997 kb=15.458984375 _tcp_avg_thruput=7.262044477222557 _tcp_Kprocessed=589.84765625 [...]

It's the _tcp_Kprocessed field, especially in relation to the kb field, which in my opinion is the most important one. What is _tcp_Kprocessed in practice, considering that its values are often very inconsistent and not proportionate to kb? Thanks.
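While waiting for an authoritative definition, a quick sketch for seeing how the two fields relate over time (rex is used because fields with a leading underscore are not always auto-extracted):

index=_internal source=*metrics.log* group=tcpin_connections
| rex "kb=(?<kb>[\d\.]+)"
| rex "_tcp_Kprocessed=(?<kprocessed>[\d\.]+)"
| timechart span=5m avg(kb), avg(kprocessed)

Charting the two side by side per forwarder (add a by sourceIp) may at least show whether the divergence is constant or bursty.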
Hi, can anybody help with this task?

Inputs:

"nice_date",sFaultInverter1,sFaultInverter2,sFaultInverter3,sFaultPFC,"sFaultSR-Plaus",sFaultSR,sFaultSpeed
"05.12.2023 10:46:53",0,0,1,0,"-1",0,0
"05.12.2023 10:43:27","-1","-1","-1","-1","-1","-1","-1"
"05.12.2023 10:41:17",0,320,0,0,"-1",0,0
"05.12.2023 10:30:32",0,0,1,0,"-1",0,0
"05.12.2023 10:28:51",0,0,1,0,"-1",0,0
"05.12.2023 10:28:10","-1","-1","-1","-1","-1","-1","-1"

Lookup:

Attribut,Value,ErrorCode
sFaultInverter1,-1,NoCommunication
sFaultInverter1,0,noError
sFaultInverter1,1,CompressorCurrentSensorFault
sFaultInverter1,2,FactorySettings
sFaultInverter1,4,
sFaultInverter1,8,
sFaultInverter1,16,InverterBridgeTemperatureSensorFault
sFaultInverter1,32,DLTSensorFault
sFaultInverter1,64,ICLFailure
sFaultInverter1,128,EEPROMFault
sFaultInverter1,256,UpdateProcess
sFaultInverter1,512,
sFaultInverter1,1024,
sFaultInverter1,2048,
sFaultInverter1,4096,
sFaultInverter1,8129,
sFaultInverter1,16384,
sFaultInverter1,32768,
sFaultInverter2,-1,NoCommunication
sFaultInverter2,0,noError
sFaultInverter2,1,CommunicationLos
sFaultInverter2,2,DcLinkRipple
sFaultInverter2,4,
sFaultInverter2,8,AcGridOverVtg
sFaultInverter2,16,AcGridUnderVtg
sFaultInverter2,32,DcLinkOverVtgSW
sFaultInverter2,64,DcLinkUnderVtg
sFaultInverter2,128,SpeedFault
sFaultInverter2,256,AcGridPhaseLostFault
sFaultInverter2,512,InverterBridgeOverTemperature
sFaultInverter2,1024,
sFaultInverter2,2048,

I would like to have a table with, e.g., 3 columns:

"nice_date",sFaultInverter1,ErrorCode
"05.12.2023 10:46:53",0,noError
"05.12.2023 10:43:27","-1",NoCommunication
"05.12.2023 10:41:17",0,noError
"05.12.2023 10:30:32",0,noError
"05.12.2023 10:28:51",0,noError
"05.12.2023 10:28:10","-1",NoCommunication

i.e. for each value of sFaultInverter1, an ErrorCode from the lookup table. Any help?
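A sketch of one way to do this, assuming the lookup above is saved as a lookup file named errorcodes.csv (the name is an assumption): lookup can match on two fields at once, so a constant Attribut plus the event's sFaultInverter1 value selects the right row.

... your base search ...
| eval Attribut="sFaultInverter1"
| lookup errorcodes.csv Attribut, Value AS sFaultInverter1 OUTPUT ErrorCode
| table nice_date, sFaultInverter1, ErrorCode

The same pattern extends to the other sFault* columns by repeating the eval/lookup pair with a different Attribut constant and a renamed output field.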
Hi, I have a Windows event for a specific application that carries its payload in the Windows event log. When using Splunk_TA_windows to extract the data, I get a field with multiple "Data" elements:

<Data>process_name</Data><Data>signature_name</Data><Data>binary_description</Data>

How can I extract these automatically into fields/values:

process_name = process_name
signature = signature_name
binary = binary_description

Is there any way without using a "big" regex - just $1:$2:$3, and then assign names to $1, $2, $3 like for CSV? Something like:

REGEX = (?ms)<Data>(.*?)<\/Data>

This would create one multivalue field, and then I could assign field names.
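A sketch of one possible props/transforms pair, assuming exactly three <Data> elements in that order (the sourcetype name is a placeholder): FORMAT can name the capture groups positionally, which is essentially the $1/$2/$3 mapping asked for.

# props.conf
[your:windows:sourcetype]
REPORT-data_fields = extract_data_fields

# transforms.conf
[extract_data_fields]
REGEX = <Data>([^<]*)</Data><Data>([^<]*)</Data><Data>([^<]*)</Data>
FORMAT = process_name::$1 signature::$2 binary::$3

If the number of <Data> elements varies, a single-group REGEX with MV_ADD = true would instead build one multivalue field, which can then be split with eval/mvindex at search time.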
Hi guys, I started today with Splunk and have one question. I want to use an OR condition, so that the trigger fires if either the second or the third row matches. Any ideas how to do it?

| eval last_backup_t =strptime(last_backup, "%Y-%m-%d %H:%M:%S.%N%z")
| where last_backup_t < relative_time(now(), "-2d@d")
| search is_offline= true

Thanks
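A sketch of one way to combine the two conditions, assuming is_offline is an event field holding the literal text true: move both tests into a single where with OR, since consecutive where/search pipes act as AND.

| eval last_backup_t = strptime(last_backup, "%Y-%m-%d %H:%M:%S.%N%z")
| where last_backup_t < relative_time(now(), "-2d@d") OR is_offline="true"

Note that inside where the value must be compared as a quoted string (is_offline="true"), unlike the search command, where quotes are optional.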
Hi, we are ingesting Couchbase JSON documents into Splunk Cloud using Kafka. When I open the same document in Visual Studio Code (the first version being what was ingested into Splunk as _raw, the second the original Couchbase JSON) and compare them, I can see differences. The Splunk syntax-highlighted data for this record is identical to the original Couchbase JSON. Can you please help me understand why _raw shows this data differently, and is there any way to get the _raw data in the same format as the original JSON? Thank you.
Hello,

https://docs.appdynamics.com/appd/21.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/validate-the-cluster-agent-installation

1. We are validating the Cluster Agent installation.
2. We have deployed AppDynamics using EKS on AWS.
3. We have successfully deployed it using the Helm chart.
4. But in the dashboard it says no data available.
5. We have installed the Cluster Agent and InfraViz using the documentation above, and we are not able to get visual data in the console; the Metrics Browser just says no data available. We are using an EKS cluster on version 1.25 with 2 nodes, and we have deployed the bank-of-anthos application in our cluster.
6. Our values file:

# To install InfraViz
installInfraViz: true

# AppDynamics controller info
controllerInfo:
  url: https://cat202312051119163.saas.appdynamics.com:443
  account: My account name
  username: My username
  password: My password
  accessKey: my access key
  globalAccount: my account name

# InfraViz config
infraViz:
  nodeOS: "linux"
  enableMasters: true
  stdoutLogging: true
  enableContainerHostId: true
  enableServerViz: true
  enableDockerViz: false

# NetViz config
netViz:
  enabled: true
  netVizPort: 3892

Screenshot: Screenshot 2023-12-13 at 3.07.16 PM.png

Please suggest a workaround for the above issues. Thanks
Hi Team, we received a requirement to monitor the Web Services Utilities: Message Monitor in SAP systems. PFA screenshot for reference. Please confirm whether there is an option in the SAP ABAP agent to monitor the error/log messages below. Thanks, Selvan
I want to extract only the process name value from the logs and store it in a table.

Input log:

<30>1 2023-12-13T06:22:20.197Z 10.205.101.94 4 CGA3001I [sev="INFO" msg="Event" event="Data is getting from process name: C:\\ProgramFiles\\notepad.exe. Now we can try to write the logs. Mode: Operational"]

Desired output:

C:\\ProgramFiles\\notepad.exe

I have tried this command:

regex "(?<=Process name:).*?(?=\.\s+)" | table Process

But I didn't get any data.
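A sketch of one possible extraction, assuming the path never contains spaces (as in the sample); note that the regex command only filters events, while rex creates fields, and the sample text says "process name" in lower case:

| rex field=_raw "process name: (?<process_name>\S+\.exe)"
| table process_name

If paths with spaces are possible, a pattern anchored on the trailing sentence, such as (?<process_name>.+?)\.\s, would be a safer assumption to test.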
I set up the Microsoft Teams Add-on for Splunk yesterday and am successfully ingesting data from our tenant. My query is regarding the relationship between the volume of incoming webhooks from Azure and the callRecord events.

As I understand it (and this is likely the root cause of my confusion), Azure pushes a change notification to the Splunk webhook each time a call ends, containing the unique call ID. The Teams call record input runs on a schedule (in my case every five minutes) and retrieves all the call records it has received change notifications for since it last ran. I would therefore expect an equal number of m365:webhook and m365:teams:callRecord events, but there aren't. I'm typically seeing a 3:2 ratio of webhook to callRecord events.

I believe the 'id' field in the webhook event and the callRecord matches (this is the identifier Splunk uses to retrieve the callRecord via the Graph API), and I would have expected the id in each event type to be unique, but there appear to be many duplicates in both event types. If I look at my data for yesterday, I can see:

4163 webhook events
3867 callRecord events

But if I dedup on 'id', I see:

2614 webhook events
2586 callRecord events

...which still doesn't match (although it's much closer), and that is a lot of duplicates. Any bright ideas, folks?
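A small diagnostic sketch for narrowing down the mismatch, assuming both sourcetypes live in the same index (the index name is a placeholder): grouping by id shows which ids only ever appear in one of the two sourcetypes.

index=m365 sourcetype IN ("m365:webhook", "m365:teams:callRecord")
| stats values(sourcetype) AS sourcetypes, count BY id
| where mvcount(sourcetypes)=1
| stats count BY sourcetypes

Ids that appear only as m365:webhook would point at notifications whose Graph API retrieval failed or was deduplicated away; ids only on the callRecord side would suggest records fetched without a surviving webhook event.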