I am running the following query:

index="ABCi" sourcetype=DEF | timechart span=1h count | fields - _time | streamstats current=t diff(count) as count_diff | stats avg(count_diff)

but I am receiving the following error:

Error in 'streamstats' command: The argument 'diff(count)' is invalid.

Can you please help? Thanks
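`streamstats` only accepts standard statistical functions (avg, sum, range, and so on), so `diff(count)` is rejected as invalid. One way to get the hour-over-hour difference, sketched against the same index and sourcetype from the question, is the `delta` command, which subtracts the previous event's value:

```
index="ABCi" sourcetype=DEF
| timechart span=1h count
| delta count as count_diff
| stats avg(count_diff)
```

The `fields - _time` step can be dropped; `delta` works on the event order that `timechart` already produces.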
Hi Splunkers, I have a GC log like below:

[716920.165s][info][gc] GC(27612) Concurrent reset 24.051ms
[716909.883s][info][gc] GC(27611) Concurrent update references 3124.593ms
[716909.885s][info][gc] GC(27611) Pause Final Update Refs 1.336ms
[716909.885s][info][gc] GC(27611) Concurrent cleanup 79178M->58868M(153600M) 0.143ms
[716906.314s][info][gc] GC(27611) Pause Final Mark 2121.376ms
[716906.315s][info][gc] GC(27611) Concurrent cleanup 71900M->71709M(153600M) 0.240ms
[716906.757s][info][gc] GC(27611) Concurrent evacuation 441.920ms
[716906.758s][info][gc] GC(27611) Pause Init Update Refs 0.126ms

I'm trying to get statistics on the total time spent across all these fields (the values in ms at the end of each line): sum all events in ms and draw a chart or table with the total value from the last 4 hours. For instance:

19.00 - 245000ms
20.00 - 344000ms
21.00 - 345500ms
22.00 - 452000ms

I did manage to extract the time in ms from all the fields, but when I use a query like:

timechart span=1h sum(eval(Concurrent_reset+Concurrent_Update+Pause_Final_Mark+Concurrent_cleanup+Concurrant_evacuation+Pause_Init_Update)) as total

I just receive results from the 19.00-20.00 timespan. What am I doing wrong here?

regards, Sz
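Each GC log event carries only one of those measurements, so the expression `Concurrent_reset+Concurrent_Update+...` evaluates to null whenever any addend is missing from an event, and `sum(eval(...))` silently skips those events; you only see results in buckets where, by chance, something summed. Wrapping each field in `coalesce(..., 0)` (field names assumed to match your extractions) avoids that:

```
... | timechart span=1h sum(eval(coalesce(Concurrent_reset,0)+coalesce(Concurrent_Update,0)+coalesce(Pause_Final_Mark,0)+coalesce(Concurrent_cleanup,0)+coalesce(Concurrant_evacuation,0)+coalesce(Pause_Init_Update,0))) as total
```

An equivalent alternative is `| fillnull value=0 Concurrent_reset Concurrent_Update ...` before the `timechart`.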
Our initial install of the MS Windows AD Objects App was successful and the build searches were successful. However, recently the AD Group rebuild search is failing on the last line of the search:

| outputlookup AD_Obj_Group

While the search seems to complete, it completes with the following error:

Error in 'outputlookup' command: Could not write to collection '__AD_Obj_Group_LDAP_list_kv': An error occurred while saving to the KV Store. Look at search.log for more information.

In looking at the search job, I am also seeing the following error:

ERROR FastTyper [30728 localCollectorThread] - caught exception in eventtyper, search='(index=index1 OR index2 OR index=index3source="*:System" "Installation Failure")' in event typer: err='Comparator '=' has an invalid term on the left hand side: index=index=index3source"

For the above error I know where the issue is in the search, but I do not know where this configuration lives within Splunk Cloud so I can correct it. Thoughts on how to correct the outputlookup and/or the search term? Thanks. Jimmy
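The FastTyper error points at a saved eventtype rather than at the rebuild search itself: that malformed string is an eventtype definition being evaluated against every search's results. In Splunk Cloud you can edit it under Settings > Event types (which writes to an app's eventtypes.conf) without filesystem access. A corrected version might look like the following; the stanza name here is hypothetical, since the error message doesn't show which eventtype it came from:

```
# eventtypes.conf -- stanza name is a placeholder; find the real one
# under Settings > Event types by searching for "Installation Failure"
[installation_failure]
search = (index=index1 OR index=index2 OR index=index3) source="*:System" "Installation Failure"
```

The original was missing `index=` before the second index name and a space before `source`, which is what produced the fused term `index=index3source`.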
Hi, For field extractions in a clustered environment do you have to use the props.conf method or can you use the field extractor GUI on the search head?   Thanks,   Joe
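Both approaches work, because they end up in the same place: the field extractor GUI simply writes `EXTRACT-` stanzas into props.conf on the search head for you, and in a search head cluster that knowledge-object change is replicated to the other members. A hand-written equivalent (sourcetype name and regex are hypothetical examples) would be:

```
# props.conf on the search head(s) -- search-time extraction
[my_sourcetype]
EXTRACT-status = status=(?<status>\d+)
```

Search-time extractions like this belong on the search tier; only index-time settings (line breaking, timestamping, transforms) need to be pushed to the indexer cluster via the manager node.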
I've read a few posts here related to this topic but can't find a workable solution.

I have 200+ devices, and I want to forecast Write Response Time for each device out 30 days. My initial query to gather the data from a metric index is in a lookup table. So I've tried this based on another similar post, but I don't get any data from the predict command:

| inputlookup eg.csv
| dedupe device_name
| map maxsearches=5 search=" | inputlookup eg.csv | search device=$device_name$ | timechart span=1d avg(WriteRT) as avgWriteRT | predict avgWriteRT future_timespan=30 | eval device=$device_name$"
| table _time, WriteRT, "prediction(WriteRT)", device

I suspect it has something to do with 'search device=$device_name$', but I'm unsure what that might be. Running the inputlookup up to the predict command does return results, minus the device_name.
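A few things stand out in the query as posted: `dedupe` should be `dedup`; the inner search filters on `device` while the deduplicated token field is `device_name` (a likely mismatch if the lookup column is `device_name`); the final `table` asks for `WriteRT` and `prediction(WriteRT)` while the subsearch renames the series to `avgWriteRT`, so `predict` actually emits `prediction(avgWriteRT)`; and `maxsearches=5` caps the run at 5 of the 200+ devices. A version consistent with those names (lookup column names assumed from the post) might be:

```
| inputlookup eg.csv
| dedup device_name
| map maxsearches=250 search="| inputlookup eg.csv
    | search device_name=\"$device_name$\"
    | timechart span=1d avg(WriteRT) as avgWriteRT
    | predict avgWriteRT future_timespan=30
    | eval device=\"$device_name$\""
| table _time, avgWriteRT, "prediction(avgWriteRT)", device
```

Since `map` drops rows whose column names the outer `table` never matches, the name mismatches alone would explain "no data" even when `predict` ran fine inside the subsearch.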
Is it possible to log user logins and things like creation of new accounts and elevation of privileges for users in Splunk On-Call? We'd like to be able to audit these sorts of changes if possible.
Hello Splunkers, I have the following search. The search works fine when running it, but when it's saved as a panel in a dashboard it complains, saying "waiting for input", as some of the field values for state have $ in them ("5-drained$"). Is there any other way to change the search to ignore it?

index=abc
| chart latest(state_sinfo) as state by node
| stats count by state
| eval {state}=count
| fields - count
| replace allocated WITH "1-allocated" IN state
| replace "allocated*" WITH "1-allocated*" IN state
| replace "allocated$" WITH "1-allocated$" IN state
| replace "completing" WITH "1-completing" IN state
| replace "planned" WITH "1-planned" IN state
| replace idle WITH "2-idle" IN state
| replace "idle*" WITH "2-idle*" IN state
| replace maint WITH "3-maint" IN state
| replace reserved WITH "4-reserved" IN state
| replace down WITH "5-down" IN state
| replace "down*" WITH "5-down*" IN state
| replace "down$" WITH "5-down$" IN state
| replace "drained*" WITH "5-drained*" IN state
| replace "drained$" WITH "5-drained$" IN state
| replace "drained" WITH "5-drained" IN state
| replace "draining" WITH "5-draining" IN state
| replace "draining@" WITH "5-draining@" IN state
| replace "reboot" WITH "5-reboot" IN state
| replace "reboot^" WITH "5-reboot^" IN state
| sort +state

Thanks in Advance
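In dashboard Simple XML, anything between a pair of `$` characters is parsed as an input token, which is why the panel waits for input. The usual fix is to double every literal dollar sign to `$$` in the dashboard's copy of the search (the SPL run from the search bar stays unchanged). For the lines that contain `$`:

```
| replace "allocated$$" WITH "1-allocated$$" IN state
| replace "down$$" WITH "5-down$$" IN state
| replace "drained$$" WITH "5-drained$$" IN state
```

The doubled `$$` is unescaped back to a single `$` before the search runs, so the `replace` matches the raw field values.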
Understand where in their journey your users abandon your application. It could be due to latency or similar performance issues. Or perhaps it happens when trying to communicate with a third-party endpoint, such as a payment processing service. Learn how observing key metrics provides the insights you need to ensure your customers have a best-in-class experience. Read more about AppDynamics Business iQ! Read up on our documentation around Business Journeys. Learn how to create a conversion analysis widget.
The problem: My search head is populating with an audit lookup error after upgrading from 9.0.0 to 9.0.2.

What I've found: Looking into the Windows cert MMC on my Splunk server I saw two certs: the self-signed root CA from Splunk, and a cert named SplunkServerDefaultCert below it that is expired. I'm assuming this expired cert is causing the issue, not the actual upgrade itself. Next, I checked my KV store status; it's reading "failed." Then I checked web.conf: enableSplunkWebSSL = true, and there's a password populated in sslPassword. I then ensured privateKeyPath/serverCert/sslRootCAPath had the files in each location and checked the expiration dates for each one. The PEM for serverCert is indeed expired.

What I've done so far: I renamed the server.pem file to server.pem.back, restarted Splunk, and hoped a new cert would be generated. Didn't work; all that did was prevent the web interface from working. Then I went into openssl.conf, inserted "extendedKeyUsage = serverAuth, clientAuth" in the [v3_req] settings, and uncommented "req_extensions = v3_req" in [req]. I moved on to openssl to generate a new server cert: created and signed the new server CSR, verified it, and replaced the old server cert with the new server PEM. Still didn't work. Found $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key, renamed it, restarted Splunk, found that a new key was generated, and my KV store status still reads as "failed."

Going forward: Not sure what else I can do to fix this. Given I backed up everything, I restored it all back to square one with all the original certs and keys, except the openssl.cnf, where I left the changes stated earlier. This is my first time working with certs and I'm not too savvy with any of it, but a lot of the things I did above came from other questions on this community. I think one place I may have made a mistake was signing the server.csr I created. I signed it with the new private.key that was created along with it, not the key that is currently annotated in web.conf. I don't know if that makes a difference, but I can't think of any other reason why the new server.pem didn't work.

For reference: Jeremy describes my exact issue in the below post; however, I do not have the password to the original Splunk cert in the MMC, so I cannot recreate it as he did. Windows upgrade from 8.1.1 to 9.0: Why does it fai... - Splunk Community. Additionally, the above case is the exact issue I am having, down to the error codes.
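Signing the CSR with a different key than the one configured in web.conf would indeed break things: the private key bundled into server.pem must be the same key the CSR was generated from, and web.conf must point at that key. The sketch below (throwaway file names and subjects, not Splunk's actual paths or its default CA) walks the whole chain and then checks that the cert and key actually match, which is a quick way to validate a hand-built server.pem before deploying it:

```shell
#!/bin/sh
# Minimal sketch of the signing chain with hypothetical names.
# The key concatenated into server.pem must be the SAME key that
# produced the signed CSR, and web.conf must reference that key.
set -e
cd "$(mktemp -d)"

# 1. A self-signed root CA (stand-in for Splunk's default CA)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -days 1095 -subj "/CN=MySplunkCA"

# 2. A server key and a CSR generated from that key
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=splunk-server"

# 3. Sign the CSR with the CA
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -days 365 -out server.crt

# 4. Splunk expects cert + matching private key (+ CA) in one PEM
cat server.crt server.key ca.pem > server.pem

# 5. Sanity checks before deploying: the chain verifies, and the
#    modulus digests of cert and key agree (i.e. a matching pair)
openssl verify -CAfile ca.pem server.crt
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa  -noout -modulus -in server.key | openssl md5
```

If the two modulus digests differ, the cert and key are not a pair, which matches the symptom of signing the CSR with one key while web.conf points at another.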
Hi, I have an index=random_index which contains JSON data for a URL HTTP status code, like {'availability':200,'application':'random_name'}. The index gets input from an RPA bot, which sends to the Splunk HTTP Event Collector endpoint every hour. Example search query:

index=random_index earliest=-24h latest=now
| search availability=200
| lookup Application_details.csv application OUTPUT Service,ServiceOffering,AssignmentGroup,Priority
| stats count as avaibility_count
| eval availability_percentage=(avaibility_count/24)*100
| search availability_percentage < 95
| table availability_percentage,Service,ServiceOffering,AssignmentGroup,Priority
| appendpipe[
    | stats count
    | where count=0
    | appendcols [| eval availability_percentage=0,Service=random_service,AssignmentGroup=random_group etc
    | table availability_percentage, Service,ServiceOffering, AssignmentGroup,Priority ]]
| dedup availability_percentage, Service,ServiceOffering, AssignmentGroup,Priority
| table availability_percentage, Service,ServiceOffering, AssignmentGroup,Priority

If the percentage is less than 95%, we trigger an email and create a ServiceNow incident from the row returned by the Splunk search. But in case the index didn't receive data for an hour due to some error, how do I check that and still return a dummy result only when no results are returned, but not return the dummy result from the appendpipe section in the case where availability is less than 95%?
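`stats count` always returns exactly one row, even over zero matching events, so the "no data arrived" case can be detected without reconstructing a dummy row in `appendpipe`: when nothing arrived, `availability_count` is 0 and the computed percentage is 0, which already falls below 95. The remaining problem is that `stats` discards the per-event lookup fields; if the application is fixed for a given search (assumed here, using the post's placeholder name), the lookup can be re-applied after `stats` so the row is fully populated even when no events existed:

```
index=random_index earliest=-24h latest=now availability=200
| stats count as availability_count
| eval availability_percentage=(availability_count/24)*100
| where availability_percentage < 95
| eval application="random_name"
| lookup Application_details.csv application OUTPUT Service,ServiceOffering,AssignmentGroup,Priority
| table availability_percentage, Service, ServiceOffering, AssignmentGroup, Priority
```

With this shape, one row comes out whenever availability is below 95% (including the zero-data case, where the percentage is 0), and no row comes out when availability is healthy, so the alert action can fire on "number of results > 0" with no appendpipe needed.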
Hi All, After upgrading Splunk from 8.0 to 9.0.2, I am seeing slowness in alerting when creating tickets. Can anyone help with this? Thanks,
We installed a .NET agent on a website, and soon after, the website shut down with the following error: ".NET Runtime version 2.0.50727.8964 - Fatal Execution Engine Error (5ABCE2D2) (80131506)". The machine is running .NET Framework 2.0. Once the agent was uninstalled, the website started working fine again. Any ideas on why this might have happened?
Hello, As the title suggests, is there a way to do this in TrackMe with a single tenant, or is this feature only available with a subscription? Also, what are the best free alternatives to TrackMe? Thank you, Best Regards,
My team is moving from working directly in Splunk to a Git-based deploy where we modify the app files directly. Previously we would create and save a search directly in the Splunk web UI, but now we add to savedsearches.conf. Is there a method for creating a saved search in the web app and extracting the syntax for savedsearches.conf, or another tool to help with this process? I've read the documentation for savedsearches.conf, and many of the variables are niche in their use. Thanks for the help! -Mitch
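The stanza the UI generates is itself the easiest template: after saving a search in the web UI, copy it from `$SPLUNK_HOME/etc/users/<user>/<app>/local/savedsearches.conf` (or `etc/apps/<app>/local/savedsearches.conf` once it has been shared to the app), or dump the resolved settings with `splunk btool savedsearches list <name> --debug`. Only the keys that differ from the defaults need to go into the Git-managed file; a minimal scheduled search (names and values here are made-up examples) looks like:

```
[Errors by host - hourly]
search = index=web status=500 | stats count by host
dispatch.earliest_time = -1h
dispatch.latest_time = now
cron_schedule = 0 * * * *
enableSched = 1
```

This keeps the committed conf small and readable, while the UI remains available as a scratchpad for composing the stanza.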
We ran the Event Hub integration on a HF; after some time we want to move the app to another HF. How can I configure the start time in order to avoid duplicates?

[splunk@ilissplfwd11 local]$ cat inputs.conf
[mscs_azure_event_hub://amdocsazureadlogs]
account = splunk
consumer_group = $Default
event_hub_name = eventhub-name
event_hub_namespace = eventhub-name.servicebus.windows.net
index = amdocsazureadlogs
interval = 15
max_batch_size = 3000
max_wait_time = 10
sourcetype = mscs:azure:eventhub
use_amqp_over_websocket = 1

What would be the best configuration for handling a large amount of data (interval/max_batch_size etc.)?
The Splunk Threat Research Team (STRT) has had 3 releases of the Enterprise Security Content Update (ESCU) app within the last month (v3.57.0, v3.58.0, and v3.59.0). With these releases, there are 46 new detections and 7 new analytic stories now available in Splunk Enterprise Security via the ESCU application update process or via Splunk Security Essentials (SSE).

Release highlights include:
- Detections that enable users to hunt for the presence of AsyncRAT
- Detections to identify compromised user accounts on Azure AD and AWS
- Detections to identify events and actions that could be attributed to the Chaos and LockBit ransomware

New Analytic Stories:
- AsyncRAT
- Compromised User Account
- Swift Slicer
- Windows Certificate Services
- Chaos Ransomware
- LockBit Ransomware
- Splunk Vulnerabilities

New Detections:
- AWS AD New MFA Method Registered For User
- AWS Concurrent Sessions From Different Ips
- AWS High Number Of Failed Authentications For User
- AWS High Number Of Failed Authentications From Ip
- AWS Password Policy Changes
- AWS Successful Console Authentication From Multiple IPs
- Azure AD Concurrent Sessions From Different Ips
- Azure AD High Number Of Failed Authentications For User
- Azure AD High Number Of Failed Authentications From Ip
- Azure AD New MFA Method Registered For User
- Azure AD Successful Authentication From Different Ips
- Detect suspicious process names using a pre trained model in DSDL
- Driver Inventory
- LOLBAS With Network Traffic (External contributor - @nterl0k)
- Windows Data Destruction Recursive Exec Files Deletion
- Windows Export Certificate
- Windows Powershell Cryptography Namespace
- Windows PowerShell Export Certificate
- Windows PowerShell Export PfxCertificate
- Windows Scheduled Task with Highest Privileges
- Windows Spear Phishing Attachment Onenote Spawn Mshta
- Windows Spear Phishing Attachment Connect to None MS Office Domain
- Windows Steal Authentication Certificates Certificate Issued
- Windows Steal Authentication Certificates Certificate Request
- Windows Steal Authentication Certificates CertUtil Backup
- Windows Steal Authentication Certificates CS Backup
- Windows Steal Authentication Certificates Export Certificate
- Windows Steal Authentication Certificates Export PfxCertificate
- Detect suspicious DNS TXT records using pretrained model in DSDL
- Windows Boot or Logon Autostart Execution in Startup Folder
- Windows Modify Registry Default Icon Setting
- Windows Phishing PDF File Executes URL Link
- Windows Replication Through Removable Media
- Windows User Execution Malicious URL Shortcut File
- Windows Vulnerable Driver Loaded
- Linux Ngrok Reverse Proxy Usage
- Windows Server Software Component GACUtil Install to GAC
- Windows PowerShell Add Module to Global Assembly Cache
- Windows Credential Dumping LSASS Memory Createdump
- Splunk csrf in the ssg kvstore Client Endpoint
- Splunk Improperly Formatted Parameter Crashes splunkd
- Persistent XSS in RapidDiag through User Interface Views
- Splunk Risky Command Abuse Disclosed February 2023
- Splunk Unnecessary File Extensions Allowed by Lookup Table Uploads
- Splunk XSS via View
- Splunk List All Nonstandard Admin Accounts

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
I set up a new monitor on a JSON file last week to add its contents to a new index. Once I finished, the new index would not show any events. I messed with it for 4 days until I decided to just use an older index that was built at some point before I joined the company. I have no idea of the approximate age of this index, other than that the earliest event was 7 months ago. I know an upgrade was done since then. The issue I seem to be facing is that any new index I create is not getting data, but if I use an older one it works. I don't even know where to begin trying to solve this, so any input is appreciated. I did see something in splunkd.log about a "string index out of range" and found a suggestion to change MAX_SEGMENT = 1024 to MAX_SEGMENT = 4096 in $SPLUNK_HOME/bin/scrubber.py. That did not fix anything. Thank you!
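Before changing parser settings, it helps to confirm where the pipeline stops. A common cause of "new index works nowhere, old index works everywhere" is that the new index was created on the search head but not on the indexers, in which case events addressed to it are dropped and splunkd typically logs a warning about an unconfigured/disabled index. These internal searches (the index name is a placeholder) can narrow it down:

```
| tstats count where index=my_new_index by sourcetype

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "my_new_index"
```

If the first search returns nothing while the second shows drop warnings, the fix is defining the index on the indexer tier (indexes.conf), not on the search head.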
I'm trying to deploy the Splunk UF on Windows Server 2019 boxes. It fails, giving me a message that the forwarder installation wizard ended prematurely. I have the following MSI log:

MSI (s) (F0:64) [05:57:42:567]: Note: 1: 2203 2: C:\Windows\Installer\inprogressinstallinfo.ipi 3: -2147287038
MSI (s) (F0:64) [05:57:42:573]: Machine policy value 'LimitSystemRestoreCheckpointing' is 0
MSI (s) (F0:64) [05:57:42:573]: Note: 1: 1715 2: UniversalForwarder
MSI (s) (F0:64) [05:57:42:573]: Note: 1: 2205 2: 3: Error
MSI (s) (F0:64) [05:57:42:573]: Note: 1: 2228 2: 3: Error 4: SELECT `Message` FROM `Error` WHERE `Error` = 1715
MSI (s) (F0:64) [05:57:42:573]: Calling SRSetRestorePoint API. dwRestorePtType: 0, dwEventType: 102, llSequenceNumber: 0, szDescription: "Installed UniversalForwarder".
MSI (s) (F0:64) [05:57:42:573]: The call to SRSetRestorePoint API failed. Returned status: 0. GetLastError() returned: 127
MSI (s) (F0:64) [05:57:42:577]: File will have security applied from OpCode.
MSI (s) (F0:64) [05:57:42:674]: SOFTWARE RESTRICTION POLICY: Verifying package --> 'C:\Users\Administrator\Downloads\splunkforwarder-9.0.4-de405f4a7979-x64-release.msi' against software restriction policy
MSI (s) (F0:64) [05:57:42:674]: SOFTWARE RESTRICTION POLICY: C:\Users\Administrator\Downloads\splunkforwarder-9.0.4-de405f4a7979-x64-release.msi has a digital signature
MSI (s) (F0:64) [05:57:43:406]: SOFTWARE RESTRICTION POLICY: C:\Users\Administrator\Downloads\splunkforwarder-9.0.4-de405f4a7979-x64-release.msi is permitted to run at the 'unrestricted' authorization level.
MSI (s) (F0:64) [05:57:43:406]: Creating MSIHANDLE (375) of type 790542 for thread 4708
MSI (s) (F0:64) [05:57:43:406]: MSCOREE not loaded loading copy from system32
MSI (s) (F0:64) [05:57:43:406]: End dialog not enabled
MSI (s) (F0:64) [05:57:43:406]: Original package ==> C:\Users\Administrator\Downloads\splunkforwarder-9.0.4-de405f4a7979-x64-release.msi
MSI (s) (F0:64) [05:57:43:406]: Package we're running from ==> C:\Windows\Installer\12c17059.msi
MSI (s) (F0:64) [05:57:43:422]: APPCOMPAT: Compatibility mode property overrides found.
MSI (s) (F0:64) [05:57:43:422]: APPCOMPAT: looking for appcompat database entry with ProductCode '{6C243C23-42E6-46E7-AECC-81428601A55E}'.
MSI (s) (F0:64) [05:57:43:422]: APPCOMPAT: no matching ProductCode found in database.
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'TransformsSecure' is 1
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'DisablePatch' is 0
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'AllowLockdownPatch' is 0
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'DisableLUAPatching' is 0
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'DisableFlyWeightPatching' is 0
MSI (s) (F0:64) [05:57:43:422]: Enabling baseline caching for this transaction since all active patches are MSI 3.0 style MSPs or at least one MSI 3.0 minor update patch is active
MSI (s) (F0:64) [05:57:43:422]: APPCOMPAT: looking for appcompat database entry with ProductCode '{6C243C23-42E6-46E7-AECC-81428601A55E}'.
MSI (s) (F0:64) [05:57:43:422]: APPCOMPAT: no matching ProductCode found in database.
MSI (s) (F0:64) [05:57:43:422]: Transforms are not secure.
MSI (s) (F0:64) [05:57:43:422]: PROPERTY CHANGE: Adding MsiLogFileLocation property. Its value is 'C:\Users\Administrator\Downloads\msiexec.log'.
MSI (s) (F0:64) [05:57:43:422]: Command Line: INSTALLDIR=C:\Program Files\SplunkUniversalForwarder\ TARGETDIR=C:\ AGREETOLICENSE=Yes GENRANDOMPASSWORD=0 CURRENTDIRECTORY=C:\Users\Administrator\Downloads CLIENTUILEVEL=0 CLIENTPROCESSID=4760 USERNAME=Windows User SOURCEDIR=C:\Users\Administrator\Downloads\ ACTION=INSTALL EXECUTEACTION=INSTALL ROOTDRIVE=C:\ INSTALLLEVEL=1 SECONDSEQUENCE=1 WIXUI_INSTALLDIR_VALID=1 MONITOR_PATH=C:\Windows\NTDS RECEIVING_INDEXER=172.16.1.3:9997 WINEVENTLOG_APP_ENABLE=1 WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 WINEVENTLOG_FWD_ENABLE=0 WINEVENTLOG_SET_ENABLE=0 ENABLEADMON=1 LOGON_PASSWORD=********** LOGON_USERNAME=splunk SPLUNKPASSWORD=********** SPLUNKUSERNAME=********** DEPLOYMENT_SERVER=172.16.1.3:8089 ADDLOCAL=Complete ACTION=INSTALL
MSI (s) (F0:64) [05:57:43:422]: PROPERTY CHANGE: Adding PackageCode property. Its value is '{405F297E-93B0-496F-AD0C-D7EAA614048F}'.
MSI (s) (F0:64) [05:57:43:422]: Product Code passed to Engine.Initialize: ''
MSI (s) (F0:64) [05:57:43:422]: Product Code from property table before transforms: '{6C243C23-42E6-46E7-AECC-81428601A55E}'
MSI (s) (F0:64) [05:57:43:422]: Product Code from property table after transforms: '{6C243C23-42E6-46E7-AECC-81428601A55E}'
MSI (s) (F0:64) [05:57:43:422]: Product not registered: beginning first-time install
MSI (s) (F0:64) [05:57:43:422]: Package name extracted from package path: 'splunkforwarder-9.0.4-de405f4a7979-x64-release.msi'
MSI (s) (F0:64) [05:57:43:422]: Package to be registered: 'splunkforwarder-9.0.4-de405f4a7979-x64-release.msi'
MSI (s) (F0:64) [05:57:43:422]: Note: 1: 2205 2: 3: Error
MSI (s) (F0:64) [05:57:43:422]: Note: 1: 2262 2: AdminProperties 3: -2147287038
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'DisableMsi' is 1
MSI (s) (F0:64) [05:57:43:422]: Machine policy value 'AlwaysInstallElevated' is 0
MSI (s) (F0:64) [05:57:43:422]: User policy value 'AlwaysInstallElevated' is 0
MSI (s) (F0:64) [05:57:43:422]: Product installation will be elevated because user is admin and product is being installed per-machine.
MSI (s) (F0:64) [05:57:43:422]: Running product '{6C243C23-42E6-46E7-AECC-81428601A55E}' with elevated privileges: Product is assigned.
MSI (s) (F0:64) [05:57:43:422]: PROPERTY CHANGE: Adding INSTALLDIR property. Its value is 'C:\Program Files\SplunkUniversalForwarder\'.
MSI (s) (F0:64) [05:57:43:422]: PROPERTY CHANGE: Adding TARGETDIR property. Its value is 'C:\'.
InstallFiles: File: Copying new files, Directory: , Size:
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: Patch
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2228 2: 3: Patch 4: SELECT `Patch`.`File_`, `Patch`.`Header`, `Patch`.`Attributes`, `Patch`.`Sequence`, `Patch`.`StreamRef_` FROM `Patch` WHERE `Patch`.`File_` = ? AND `Patch`.`#_MsiActive`=? ORDER BY `Patch`.`Sequence`
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: Error
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2228 2: 3: Error 4: SELECT `Message` FROM `Error` WHERE `Error` = 1302
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: MsiSFCBypass
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2228 2: 3: MsiSFCBypass 4: SELECT `File_` FROM `MsiSFCBypass` WHERE `File_` = ?
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: MsiPatchHeaders
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2228 2: 3: MsiPatchHeaders 4: SELECT `Header` FROM `MsiPatchHeaders` WHERE `StreamRef` = ?
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: PatchPackage
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: MsiPatchHeaders
MSI (s) (F0:64) [05:57:46:653]: Note: 1: 2205 2: 3: PatchPackage
Action ended 5:57:46: InstallFiles. Return value 1.
I am sending some traces from my service to Splunk using the OpenTelemetry Collector and the Splunk HEC exporter. My traces are getting to Splunk, and their fields are in general properly identified, but I would like the attributes of an event that have a JSON format to be further decomposed into fields. This is an example of an event: I would like for the `attributes.data` field to be further decomposed. Is that possible?
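If `attributes.data` arrives as a JSON-formatted string inside an otherwise-extracted event, `spath` can expand it at search time by pointing its `input` option at that field (index and sourcetype below are placeholders):

```
index=my_hec_index sourcetype=my_traces
| spath input=attributes.data
```

Automatic `KV_MODE = json` extraction in props.conf handles the top-level JSON of the event, but not JSON that is itself stringified inside a field, which is the case `spath input=` covers.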