I want to repeat the same alert 3 times, 5 minutes apart, like a morning call. Please let me know how I can do it. Can I organize the logic into queries, or is there an alert option for it? This is my query for the alert event:

index="main" sourcetype="orcl_sourcetype" | sort _time | tail 1 | where CNT < 10
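One way to get the morning-call behavior without extra query logic is to schedule the alert itself on a cron expression that fires three times, five minutes apart. A minimal savedsearches.conf sketch, assuming an 08:00/08:05/08:10 window and an email action (the stanza name, recipient, and times are illustrative, not from the question):

    [Morning Call - CNT below threshold]
    search = index="main" sourcetype="orcl_sourcetype" | sort _time | tail 1 | where CNT < 10
    enableSched = 1
    # comma lists are valid in cron fields: fire at 08:00, 08:05, and 08:10
    cron_schedule = 0,5,10 8 * * *
    counttype = number of events
    relation = greater than
    quantity = 0
    action.email = 1
    action.email.to = oncall@example.com

Each scheduled run re-evaluates the search, so the same alert condition is checked (and re-sent) on all three runs.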
We have added a custom snippet to track additional information like user and SAP Fiori application details. Whenever there is a script error or AJAX error while loading an application, the data captured by the custom snippet is not reflected on the EUM dashboard. Sometimes the data is not collected even when there are no errors. We are not able to identify any particular reason for this inconsistency. Below is the code we have added in the custom snippet.

<script charset="UTF-8" type="text/javascript">
window["adrum-start-time"] = new Date().getTime();

function getCustInfo() {
    if (!!sap) {
        var userId = sap.ushell.Container.getService("UserInfo").getUser().getId();
        var userName = sap.ushell.Container.getService("UserInfo").getUser().getFullName();
        if (sap.ushell.services.AppConfiguration.getCurrentApplication() != undefined) {
            var AppTitle = sap.ushell.services.AppConfiguration.getCurrentApplication().text;
            var CompID = sap.ushell.services.AppConfiguration.getCurrentApplication().applicationDependencies.name;
            if (sap.ushell.services.AppConfiguration.getCurrentApplication().reservedParameters['sap-fiori-id'] == undefined) {
                var AppDevType = 'Custom'
            } else {
                var AppDevType = 'SAP'
                var AppID = sap.ushell.services.AppConfiguration.getCurrentApplication().reservedParameters['sap-fiori-id'][0];
            }
        }
    }
    return {
        "userId": userId,
        "userName": userName,
        "AppTitle": AppTitle,
        "CompID": CompID,
        "AppDevType": AppDevType,
        "SIB_APPID": AppID
    }
}

window['adrum-config'] = {
    userEventInfo: {
        "PageView": function (context) { return { userData: getCustInfo() } },
        "Ajax": function (context) { return { userData: getCustInfo() } },
        "VPageView": function (context) { return { userData: getCustInfo() } }
    }
};

(function (config) {
    config.appKey = "AD-AAB-ACE-TNP";
    config.adrumExtUrlHttp = "http://cdn.appdynamics.com";
    config.adrumExtUrlHttps = "https://cdn.appdynamics.com";
    config.beaconUrlHttp = "http://pdx-col.eum-appdynamics.com";
    config.beaconUrlHttps = "https://pdx-col.eum-appdynamics.com";
    config.useHTTPSAlways = true;
    config.resTiming = { "bufSize": 200, "clearResTimingOnBeaconSend": true };
    config.maxUrlLength = 512;
    config.Isabapapp = true;
    config.page = { "title": function title() { return document.title; } }
})(window["adrum-config"] || (window["adrum-config"] = {}));
</script>
<script src="//cdn.appdynamics.com/adrum/adrum-23.3.0.4265.js"></script>

Any help would be appreciated. Thanks!
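One pattern worth ruling out (a defensive sketch, not a confirmed fix): getCustInfo() throws whenever sap is not yet declared, because if (!!sap) raises a ReferenceError for an undeclared identifier, and it also throws whenever the ushell services are not initialized at the moment the agent fires a beacon. A throwing callback means the userData is silently dropped, which would look exactly like the intermittent gaps described above. Guarding every lookup makes the snippet degrade to partial data instead of none:

function getCustInfo() {
    var info = {};
    try {
        // typeof is safe even when `sap` was never declared;
        // `if (!!sap)` throws a ReferenceError in that case.
        if (typeof sap === "undefined" || !sap.ushell || !sap.ushell.Container) {
            return info; // shell not ready yet; send empty userData rather than throwing
        }
        var user = sap.ushell.Container.getService("UserInfo").getUser();
        info.userId = user.getId();
        info.userName = user.getFullName();
        var app = sap.ushell.services.AppConfiguration.getCurrentApplication();
        if (app) {
            info.AppTitle = app.text;
            info.CompID = app.applicationDependencies.name;
            var fioriId = app.reservedParameters && app.reservedParameters['sap-fiori-id'];
            info.AppDevType = fioriId ? 'SAP' : 'Custom';
            if (fioriId) {
                info.SIB_APPID = fioriId[0];
            }
        }
    } catch (e) {
        // Never let a failed lookup break the beacon callback itself.
    }
    return info;
}

If the dashboard then consistently shows empty userData on early page views, the root cause is the load order of the Fiori shell versus the adrum agent rather than the snippet logic itself.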
Hello All, I need to normalize the timeline values onto one-hour buckets. For example: 12:05 AM, 12:10 AM, 12:15 AM should be taken as 12 AM; 1:05 AM, 1:10 AM, 1:15 AM should be taken as 1 AM; and so on. Can you please help me write a query for this?

Timeline (Top 10 Values)   Count   %
01:10:02 AM                2       0.368%
01:20:02 PM                2       0.368%
01:30:02 AM                2       0.368%
01:35:02 PM                2       0.368%
01:45:02 PM                2       0.368%
01:50:02 AM                2       0.368%
02:05:02 PM                2       0.368%
02:10:02 PM                2       0.368%
02:40:02 PM                2       0.368%
03:05:02 PM

Thank you.
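A sketch of the usual approach, assuming the values above live in a string field named Timeline (if they come straight from _time, drop the strptime step and format _time directly):

| eval hour_bucket = strftime(strptime(Timeline, "%I:%M:%S %p"), "%I %p")
| stats count by hour_bucket
| eventstats sum(count) as total
| eval percent = round(100 * count / total, 3)

This folds 01:10:02 AM and 01:50:02 AM into the single bucket "01 AM", keeping PM times separate as in your sample.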
Hi All, how do you customize the table width of the results of a custom search from a drilldown? I am not able to find any documentation on this.
I'm not a programmer, but I am trying to get my graph to display "No Results" or "N/A" when the where command can't find the specified name within the CSV. Instead, what I get is all of the servers listed in the spreadsheet. Here is a quick example. This works for me:

index=House sourcetype=LivingRoom [ | inputlookup HouseInventory.csv | where Room="Bathroom" | return host=$X_Furniture ] | timechart span=5m count by host

But if a user types where Room="Bathr00mZ" (see below), I get a list of all the servers in my CSV, which is what I don't want. I would rather have it say "No Results" or "N/A":

index=House sourcetype=LivingRoom [ | inputlookup HouseInventory.csv | where Room="Bathr00mZ" | return host=$X_Furniture ] | timechart span=5m count by host

I've tried this:

index=House sourcetype=LivingRoom [ | inputlookup HouseInventory.csv | where Room="Bathr00mZ" | eval res=if(Room=="Bathroom",X_Furniture,"Null") ] | timechart span=5m count by host

But this still comes back with the list of all the servers.
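The underlying behavior: when the subsearch returns zero rows it contributes no filter at all, so the outer search runs unrestricted. One common workaround (a sketch using your field names) is to append a guaranteed-unmatchable row when, and only when, the lookup came back empty:

index=House sourcetype=LivingRoom
    [ | inputlookup HouseInventory.csv
      | where Room="Bathr00mZ"
      | appendpipe [ stats count | where count==0 | eval X_Furniture="NO_MATCHING_HOST" ]
      | return host=$X_Furniture ]
| timechart span=5m count by host

The appendpipe subpipeline only produces a row when the preceding result set is empty, so a bad room name yields host=NO_MATCHING_HOST, the outer search matches nothing, and the panel falls back to its normal "No results found" display.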
I customized a dashboard page and put a submit button on it. How can I use JavaScript to monitor the button's click, send a request to Splunk, and have Splunk execute an SPL search? This is my JS code:

require([
    "jquery",
], function ($) {
    $(document).on('click', '#btn_submit', function () {
        setTimeout(function time() {
            var temp_a = document.getElementById('temp_a').value
            var temp_b = document.getElementById('temp_b').value
        }, 100);
    });
});

and the dashboard source code is:

<dashboard script="test.js">
  <label>test_js_action</label>
  <row>
    <panel>
      <html>
        <div>
          <button id="btn_submit">submit</button>
        </div>
      </html>
    </panel>
  </row>
</dashboard>

By the way, I saw a sample using splunkjs/mvc to send the request, but I can't get the whole code; I only know the JS header is:

require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function ($, mvc) {

Thank you very much if you could provide a solution.
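A sketch of the splunkjs/mvc approach, building on the handler above (the SPL string and element IDs are illustrative; real code should escape/validate the user-supplied values before splicing them into a query):

require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function ($, mvc, SearchManager) {
    $(document).on("click", "#btn_submit", function () {
        var tempA = document.getElementById("temp_a").value;
        var tempB = document.getElementById("temp_b").value;

        // Reuse one manager across clicks; create it on the first click.
        var manager = mvc.Components.getInstance("btn_search") || new SearchManager({
            id: "btn_search",
            autostart: false,
            search: "| makeresults"
        });

        // Swap in the SPL to run, then kick off the search job.
        manager.settings.set("search",
            "| makeresults | eval a=\"" + tempA + "\", b=\"" + tempB + "\"");
        manager.startSearch();

        // Read the results back when they arrive (attach once in real code).
        manager.data("results").on("data", function (results) {
            console.log(results.data().rows);
        });
    });
});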
Hi, how can we reset the password for the admin user from the CLI? I currently have an indexer running Splunk 9.1.1 in a testing environment and I forgot the username and password. There are some bin commands that prompt for a Splunk username and password, so I need to reset the credentials. Please help. Thank you.
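The usual procedure on recent versions, 9.x included, is to move the local password file aside and seed a new admin account. A sketch, assuming a default $SPLUNK_HOME:

$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

# Create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
[user_info]
USERNAME = admin
PASSWORD = <your-new-password>

$SPLUNK_HOME/bin/splunk start

On startup, Splunk applies the seed and recreates the admin account with the new password; any other local accounts from the old passwd file would need to be recreated or merged back.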
I am very new to Splunk, but I am enjoying it a lot so far. I have been tasked with writing a document on how to verify that all Domain Controllers' logs are going into Splunk, for the SecOps team to action on a daily basis. Can someone please point me to a good document on this process? Thank you in advance!
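While you look for formal docs, the usual building block is a coverage check that compares an authoritative DC list against the hosts actually reporting. A sketch, where the index name and the domain_controllers.csv lookup are assumptions you would replace with your own:

| inputlookup domain_controllers.csv
| fields host
| join type=left host
    [ | tstats latest(_time) as last_seen where index=wineventlog by host ]
| eval status = case(isnull(last_seen), "never seen",
                     last_seen < relative_time(now(), "-4h"), "stale",
                     true(), "ok")

Scheduled daily with an alert on anything not "ok", this gives the SecOps team the actionable list your document describes.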
Hello, regarding filtering Splunk roles: we would like to allow only transforming commands (stats, timechart, ...) for users on a specific search head. This search head is not part of the cluster; it only queries the clustered indexers. The aim is to prevent specific users from accessing raw index data and to show them only statistics. At the moment we use summary indexing into a local index by scheduling reports with sistats or sitimechart (sketched below), but converting the searches is long and heavy. Thanks for your help.
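For readers who have not used the si- commands mentioned above, the pattern looks roughly like this (a sketch; the report name, indexes, and fields are illustrative):

``` scheduled report, with summary indexing enabled, writes pre-aggregated rows ```
index=web sourcetype=access_combined | sitimechart span=1h count by host

``` what the restricted users run against the summary instead of raw data ```
index=summary source="hourly_host_counts" | timechart span=1h count by host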
We are scanning our Splunk Enterprise instance with AIDE for Linux and have a decent set of exclusions defined; otherwise it is VERY noisy with findings. We are still getting quite a bit of noise from things like installed apps or add-ons in seemingly benign files. Is there a recommended AIDE configuration for Splunk that will focus it only on the "important" files? We don't want to just broadly exclude top-level directories, so if this has been solved, I would love to hear about your aide.conf exclusion settings for Splunk.
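Not an official baseline, but a sketch of the shape such rules usually take, assuming an /opt/splunk install and whatever rule group (here NORMAL) your aide.conf already defines. The idea is to drop the constantly-churning runtime state while keeping binaries and shipped defaults under watch:

# Exclude paths Splunk rewrites constantly.
!/opt/splunk/var
!/opt/splunk/etc/apps/.*/local
!/opt/splunk/etc/apps/.*/lookups
!/opt/splunk/etc/users

# Keep the stable, security-relevant paths under full checks.
/opt/splunk/bin NORMAL
/opt/splunk/etc/system/default NORMAL
/opt/splunk/etc/apps/.*/default NORMAL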
When should I opt to use Business Metrics and how do I configure them?

Organizations commonly track KPIs like Total Revenue, Order Count, etc. to monitor the health of their business. These KPIs influence the organization's spending decisions and allow analysis of the impact of that spending. It is also desirable to track these KPIs in near real-time rather than waiting for periodic reports. Recently, Cisco AppDynamics released support for Business Metrics as part of the Cisco Cloud Observability Platform. This brings a powerful capability for business owners to monitor application performance. It also enables users to slice business metric data into segments and set up health rules.

In this article:
Why Business Metrics?
How do Business Metrics work?
Business Metrics best practices
Additional resources

Why Business Metrics?

The obvious question that arises is: what is the need for this feature, and why can't Custom Metrics or Span Metrics do the job?

Custom Metrics are emitted from the application and are usually aggregated in a time series for presentation. The measurements are usually generated in the application with the help of an OpenTelemetry Metrics SDK and usually have dimensions attached. The measurements are aggregated in a time-series fashion within an observability solution.

Span Metrics are emitted by a custom collector, which can be used to scrape span attributes to generate metric measurements with the required dimensions. This approach also allows you to emit metric measurements with the desired dimensions, and it can potentially be used to build a custom collector that emits the measurements by processing the entire trace. However, it may require code changes and multiple iterations to get the desired functionality in place.

Custom Metrics or Span Metrics may serve your purpose very well if the metric of interest and the associated dimensions originate from a single span and you know which span to find them in. However, in modern distributed architectures in the cloud, an application consists of multiple microservices, and a single business transaction can have spans across tens or hundreds of microservices. In such deployments, it is not easy to correlate different dimensions for the same metric across spans.

Business Metrics in Cisco Cloud Observability solves this problem by traversing business transaction spans and collecting the metric and all its dimensions as part of a single measurement. This is extremely useful because users can slice and dice metric data by dimension to gain deep insights into the performance of their business. Business Metrics also has an easy configuration experience, with prescriptive templates for business owners to configure the metrics for their use cases. Another advantage of this approach is that the user gets a visual data preview during configuration, allowing them to select the attribute for generating measurements as well as the dimensions used to segment the data.

How do Business Metrics work?

Let us walk through a use case based on the sample OpenTelemetry demo application. You can follow along by getting access to the AppDynamics Cloud Observability integration.

Business use case: track the Total Revenue for this e-commerce application and then further analyze the data based on shipping zip code. The walkthrough has five steps: 1. Instrumentation | 2. Configuration | 3. Analysis of metric data | 4. Health rule setup | 5. Health alert analysis

Step 1: Instrumentation

The business owner collaborates with developers to instrument the checkout and shipping services. This involves instrumenting the order amount and zip code information to be sent as part of the OpenTelemetry span attributes, as follows:

Figure 1: Instrumentation of the checkout service with the app.order.amount attribute.
Figure 2: Instrumentation of the shipping service with the app.shipping.zip_code attribute.

The above changes result in app.order.amount and app.shipping.zip_code being added as span attributes on the respective spans for the checkout and shipping services. You can visualize the position of those attributes below:

Figure 3: Service map for the opentelemetry-demo app, with the attributes of interest annotated in red.
Figure 4: Trace view of the checkout business transaction, with the attributes of interest annotated in red.

While processing the trace in the cloud, the attributes are scraped and presented for configuration.

Step 2: Configuration

The DevOps owner goes to the checkout business transaction page, navigates to the Business Metrics section, and begins configuration. The "Sum" template is selected for the use case and the metric attribute is selected. The list of metric attributes and data types is generated from the attributes scraped from the trace.

NOTE: Only attributes with a summable data type are displayed in the list. The attribute for the zip code is a string, so it is not listed here.

Figure 5: Metric attribute list presented to the user for configuration.

After choosing a title for the metric template, the segmentation attributes are selected. In this case, the attribute of interest is the zip code, so that is selected. This completes the configuration for the selected use case.

Figure 6: Selecting the option for segmentation of metric data.

Step 3: Analysis of metric data

The business owner can now use the configured metric for business analysis. The metric appears on the business transaction page in the Business Metrics section. The business owner can examine the metric data on a timeline according to the global time selector, with options to show a 15-day, 30-day, or 90-day baseline. For analysis based on segments, the user clicks "Show Segments" to analyze revenue by zip code. Individual values can be analyzed by selecting them on the x-axis.

Figure 7: Animation showing business metric analysis.

Step 4: Health rule setup

Users can set up health rules to ensure the metric stays in the expected range. The setup follows the same health rule flow as any other business transaction metric.

Figure 8: Health rule configuration based on the BT entity and associated business metric.
Figure 9: Setting up an evaluation condition based on deviation from the baseline.

Step 5: Analysis of health alerts

Based on the health rules set up above, the user is notified of any health rule violation. The user can go to the business transaction page and click the health rule violation widget that displays the violating metric.

Figure 10: On clicking the health violation timeline, the violating metric, Total Revenue, is displayed.

The user can scroll further down to find the cause of the drop by investigating the performance data. This single pane of glass correlating business metrics with performance data lets the user find business-affecting performance issues and prioritize them. In this example, the increase in Average Response Time is correlated with the drop in Total Revenue.
Figure 11: Note that the revenue trends lower as the average response time increases.

Best practices

During the instrumentation of attributes on spans, we recommend adding a prefix to business attribute names that is distinct from those found in the OTel semantic conventions. This will help you find the business attributes during configuration. For example, in the OTel demo, the app.* prefix helps in finding the business attributes.

We recommend assigning unique attribute names on each span. While gathering attributes from the spans of a trace, if a duplicate attribute name is seen, the latest value is reported.

Figure 12: Attributes with the app.* prefix are business attributes; attributes with the http.* prefix are auto-instrumented.

When choosing attributes as dimensions for segmentation, we recommend selecting attributes that do not have exceedingly high cardinality. This makes it easier to analyze the data in a segmented view.

Additional resources

See Configure Business Metrics, under Application Performance Management, Business Transactions, in the documentation.
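To make the attribute-prefix best practice above concrete, here is a minimal sketch using the OpenTelemetry JavaScript API (the function and field names are illustrative and not from the demo source, which instruments each service in its own language):

const { trace } = require('@opentelemetry/api');

function recordCheckout(order) {
    const span = trace.getActiveSpan();
    if (span) {
        // The app.* prefix keeps business attributes visually distinct from
        // auto-instrumented http.* attributes during metric configuration.
        span.setAttribute('app.order.amount', order.totalAmount);
        span.setAttribute('app.shipping.zip_code', order.shippingZip);
    }
}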
Good afternoon, I'm submitting this message for help with editing the font color of the labels in a pie chart on a panel created in Splunk Dashboard Studio. Is there a method of changing the font color? I looked through the documentation and found a page listing all the possible source options for the pie chart; one option in particular is called seriesColors. I'm still fairly new to Splunk, so I don't have any expertise in editing pie charts yet. Thank you.
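For reference, seriesColors controls the slice colors rather than the label text. In Dashboard Studio source it sits under the visualization's options, roughly like this sketch (the data source ID and hex values are illustrative):

{
    "type": "splunk.pie",
    "options": {
        "seriesColors": ["#7B56DB", "#009CEB", "#00CDAF", "#DD9900", "#FF677B"]
    },
    "dataSources": { "primary": "ds_search_1" }
}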
I'm trying to understand the API usage (internal and public, basic auth vs. token-based) in our controllers so that they are appropriately sized and there are no performance bottlenecks. How do I get these stats? I want to separate the internal API volume from the public volume and explore the possibility of moving some of these APIs to APIM. Also, is it possible to move all the external/public APIs to a different port to manage them better?
Scenario: I have a search head and two indexers in a cluster, and an index (index_a) defined in the cluster. Until now I have always deployed a copy of indexes.conf with a mock index on the SH, for example to manage role permissions for it. This was helpful to show the index in the role definition. However, in this deployment there is no indexes.conf file defining index_a on the SH, yet the index still shows up in the configuration UI. All instances run Splunk Enterprise 9.0.5.1.

Problem: I have a new index, defined after index_a, called index_b. For some reason, index_b doesn't show up in the role definitions.

What I tried: I looked for the name of index_a in the config files of the search head. The only appearance is in system/local/authorize.conf. I also compared the index definitions on the CM, including file permission settings; the two configurations differ only in index name and app. I also set up a test environment with one indexer and one search head: I created one index on the indexer, and it appeared in the SH role definition some time later without me configuring anything. Again I verified whether the name of the index appears anywhere in the SH's configs, and it didn't.

Question: Is there a new feature that makes the mock definitions on the SH obsolete? I am aware that I can solve this with the mock-index approach, but there appears to be a nicer way to do it, as happens with index_a.
Hello, I have the following issue; do you know any solution or workaround? (Or maybe I declared something incorrectly.) When a comma-separated field value is used inside MAP with the IN operator, it does not work with the token from the outer search. But when I write out the value of that outer field with eval, it is recognized.

| makeresults
| eval ips="a,c,x"
| map [ | makeresults
    | append [| makeresults | eval ips="a", label="aaa" ]
    | append [| makeresults | eval ips="b", label="bbb" ]
    | append [| makeresults | eval ips="c", label="ccc" ]
    | append [| makeresults | eval ips="d", label="ddd" ]
    ```| search ips IN ($ips$)``` ```NOT WORKING```
    | search ips IN (a,c,x) ```WORKING```
    | eval outer_ips=$ips$ ] maxsearches=10
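A workaround sketch, assuming (as your working eval line suggests) that map hands the token through as the single quoted string "a,c,x" rather than as the bare list IN needs: split the string into a multivalue field and test membership with mvfind. If your values can contain regex metacharacters, escape them first.

| makeresults
| eval ips="a,c,x"
| map [ | makeresults
    | append [| makeresults | eval ips="a", label="aaa" ]
    | append [| makeresults | eval ips="b", label="bbb" ]
    | append [| makeresults | eval ips="c", label="ccc" ]
    | append [| makeresults | eval ips="d", label="ddd" ]
    | eval outer_ips=$ips$          ``` arrives as one comma-separated string ```
    | makemv delim="," outer_ips    ``` split into a multivalue field ```
    | where isnotnull(mvfind(outer_ips, "^".ips."$")) ] maxsearches=10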
I've tried to create a support ticket through the web portal, but one dropdown is not displaying correctly in the browser, which is blocking my ability to post the request: "Splunk Support access to your company data". Instead of a dropdown, it displays two dashes and cannot be selected. Because this is a required field, I can't submit a support case. I've tried calling support twice and my phone call was dropped by the automated system. How can I submit a support ticket?
In the last month, the Splunk Threat Research Team (STRT) has had 2 releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.15.0 and v4.16.0). With these releases, there are 20 new detections, 4 new analytic stories, 7 updated analytics, 2 updated behavioral analytics detections, 3 new behavioral analytics detections, 2 updated analytic stories, and 2 deprecated analytics now available in Splunk Enterprise Security via the ESCU application update process.

Content highlights include:

A malware analytic story that groups 6 new analytics to help detect a new phishing-driven malware campaign distributing DarkGate malware, which utilizes stolen email threads to trick users into downloading malicious payloads via hyperlinks.

Coverage for a new zero-day vulnerability in SysAid On-Prem Software (CVE-2023-47246) that allows attackers to upload a WebShell and other payloads, gaining unauthorized access and control.

A new analytic, Risk Rule for Dev Sec Ops by Repository, that correlates repository and risk score to identify patterns and trends based on the associated level of risk, providing a comprehensive view of the risk landscape and helping to make informed decisions. Additionally, we released an updated analytic story that groups 10 new analytics to help security operations teams identify the potential compromise of Azure Active Directory accounts.

Detections for a critical security update, CVE-2023-4966, for the NetScaler Application Delivery Controller (ADC) and NetScaler Gateway. This vulnerability, if exploited, can lead to unauthorized data disclosure and possibly session hijacking. Also covered is "PlugX RAT", or "Kaba", known as the "silent infiltrator": the go-to tool for sophisticated hackers with one goal in mind, espionage. Additionally, we updated three existing analytics: one identifying suspicious file creation in the root drive observed with NjRAT, and two covering privilege escalation flaws in Atlassian Confluence.
New Analytics (20)
Azure AD Device Code Authentication
Azure AD Tenant Wide Admin Consent Granted
Azure AD Multiple AppIDs and UserAgents Authentication Spike
Azure AD Block User Consent For Risky Apps Disabled
Azure AD User Consent Blocked for Risky Application
Azure AD OAuth Application Consent Granted By User
Azure AD User Consent Denied for OAuth Application
Azure AD New MFA Method Registered
Azure AD Multiple Denied MFA Requests For User
Azure AD Multi-Source Failed Authentications Spike
Risk Rule for Dev Sec Ops by Repository
Windows ConHost with Headless Argument
Windows CAB File on Disk
Windows WinDBG Spawning AutoIt3
Windows MSIExec Spawn WinDBG
Windows Modify Registry Default Icon Setting
Windows AutoIt3 Execution
Splunk App for Lookup File Editing RCE via User XSLT
Splunk XSS in Highlighted JSON Events
Citrix ADC and Gateway Unauthorized Data Disclosure

New Analytic Stories (4)
DarkGate Malware
SysAid On-Prem Software CVE-2023-47246 Vulnerability
Citrix NetScaler ADC and NetScaler Gateway CVE-2023-4966
PlugX

Updated Analytics (7)
AWS ECR Container Scanning Findings High
AWS ECR Container Scanning Findings Medium
AWS ECR Container Scanning Findings Low Informational Unknown
AWS ECR Container Upload Outside Business Hours
Windows Admin Permission Discovery
Confluence CVE-2023-22515 Trigger Vulnerability
Confluence Data Center and Server Privilege Escalation

Updated Behavioral Analytics Detections (2)
All BA detections updated to use the IN command in SPLv2 instead of multiple ORs in the detection analytic
Added a new key, detection_type = STREAMING, in the generated BA yaml files

New Behavioral Analytics Detections (3)
Detect Prohibited Applications Spawning cmd exe browsers (validation)
Detect Prohibited Applications Spawning cmd exe office (validation)
Detect Prohibited Applications Spawning cmd exe powershell (validation)

Updated Analytic Stories (2)
Azure Active Directory Account Takeover
Splunk Vulnerabilities

Deprecated Analytics (2)
Correlation by Repository and Risk
Correlation by User and Risk

The team has also published the following blog:
More Than Just a RAT: Unveiling NjRAT's MBR Wiping Capabilities

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Hello, I am trying to integrate ChatGPT with my dashboard, and I am using the OpenAI API add-on (TA-openai-api). I am getting the following error:

"HTTP 404 Not Found -- Could not find object id=TA-openai-api:org_id_default: ERROR cannot unpack non-iterable NoneType object"

Can anyone help me with this?
Hi guys, in Splunk, a field named "event_sub_type" has multiple values. We don't want to ingest any logs into Splunk whose "event_sub_type" value is either "WAN Firewall" or "TLS" (as marked in the attached screenshot), as these are huge, unwanted logs.

Our search query is: index=cato sourcetype=cato_source

We tried multiple ways of editing props.conf and transforms.conf to exclude these logs, as below, but none of them succeeded:

props.conf
[sourcetype::cato_source]
TRANSFORMS-filter_logs = cloudparsing

transforms.conf
[cloudparsing]
REGEX = \"event_sub_type\":\"(WAN Firewall|TLS)\"
DEST_KEY = queue
FORMAT = nullQueue

Can someone please guide us on how to exclude events whose "event_sub_type" value contains either "WAN Firewall" or "TLS" by editing props.conf and transforms.conf?

Raw events for reference, which need to be excluded:

1. "event_sub_type":"WAN Firewall"

{"event_count":1,"ISP_name":"Shanghai internet","rule":"Initial Connectivity Rule","dest_is_site_or_vpn":"Site","src_isp_ip":"0.0.0.0","time_str":"2023-11-28T04:27:40Z","src_site":"CHINA-AZURE-E2","src_ip":"0.0.0.1","internalId":"54464646","dest_site_name":"china_112,"event_type":"Security","src_country_code":"CN","action":"Monitor","subnet_name":"cn-001.net-vnet-1","pop_name":"Shanghai_1","dest_port":443,"dest_site":"china_connect","rule_name":"Initial Connectivity Rule","event_sub_type":"WAN Firewall","insertionDate":1701188916690,"ip_protocol":"TCP","rule_id":"101238","src_is_site_or_vpn":"Site","account_id":5555,"application":"HTTP(S)","src_site_name":"china_connect","src_country":"China","dest_ip":"0.0.0.0","os_type":"OS_ANDROID","app_stack""TCP","TLS","HTTP(S)"],"time":1701188860834}

2. "event_sub_type":"TLS"

{"event_count":4,"http_host_name":"isp.vpn","ISP_name":"China_internet","src_isp_ip":"0.0.0.0","tls_version":"TLSv1.3","time_str":"2023-11-28T04:27:16Z","src_site":"china_mtt","src_ip":"0.0.0.0","internalId":"rtrgrtr","domain_name":"china.gh.com","event_type":"Security","src_country_code":"CN","tls_error_description":"unknown CA","action":"Alert","subnet_name":"0.0.0.0/24","pop_name":"china_1","dest_port":443,"event_sub_type":"TLS","insertionDate":1701188915580,"dest_country_code":"SG","tls_error_type":"fatal","dns_name":"china.com","traffic_direction":"OUTBOUND","src_is_site_or_vpn":"Site","account_id":56565,"application":"Netskope","src_site_name":"CHINA-44","src_country":"China","dest_ip":"0.0.0.0","os_type":"OS_WINDOWS","time":1701188836011,"dest_country":"Singapore"}
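A sketch of the likely fix: props.conf stanzas for a sourcetype use the bare sourcetype name; the "<type>::" prefix form exists only for source:: and host:: stanzas, so [sourcetype::cato_source] never matches and the transform never runs. Assuming the events arrive as shown above, the pair would look like:

# props.conf (on the indexers or heavy forwarder that first parses the data)
[cato_source]
TRANSFORMS-filter_logs = cloudparsing

# transforms.conf
[cloudparsing]
REGEX = \"event_sub_type\":\"(WAN Firewall|TLS)\"
DEST_KEY = queue
FORMAT = nullQueue

Note that index-time transforms only apply on the first Splunk instance that parses the event; if a heavy forwarder sits in front of the indexers, the files must live there, and a universal forwarder alone will not apply them.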
I have a log like below displayed in SPlunk UI. I want the "message" key to be parsed into json as well. how to do that? The below is the raw text.       {"stream":"stderr","logtag":"F","message":"{\"Context\":{\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\"},\"LogCollection\":[{\"source\":\"handler.go:44\",\"timestamp\":\"2023-11-30T15:01:07.209285695Z\",\"msg\":{\"specversion\":\"1.0\",\"type\":\"com.cnc.caas.documentgenerationservices.documentgeneration.completed.public\",\"source\":\"/events/caas/documentgenerationservices/record/documentgeneration\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"time\":\"2023-11-30T15:01:06.972071059Z\",\"subject\":\"record-documentgenerationservices-wip\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"datacontenttype\":\"application/json\",\"data\":{\"CAAS\":{\"Event\":{\"Version\":\"2.0.0\",\"EventType\":\"documentgeneration.completed\",\"LifeCycleStatus\":\"wip\",\"EventSequence\":4,\"OriginTimeStamp\":\"2023-11-30T15:01:06.972Z\",\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RequestedDeliveryChannel\":\"Print\",\"RecordedDeliveryChannel\":\"Print\",\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"Priority\":false,\"EventFailedStatus\":0,\"RetryCount\":1,\"Errors\":null,\"OriginalSqsMessage\":{\"data\":{\"CAAS\":{\"Event\":{\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationGroupID\":\"mbrmatreqok\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"Errors\":null,\"EventFailedStatus\":0,\"EventSequence\":4,\"EventType\":\"recordcomposition.response.start\",\"LifeCycleStatus\":\"wip\",\"OriginTimeStamp\":\"2023-11-30T15:00:04.996Z\",\"PreRendered\":false,\"Priority\":false,\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RecordedDeliveryChannel\":\"Print\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RequestedDeliveryChannel\":\"Print\",\"RetryCount\":1,\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"Version\":\"2.0.0\"}}},\"datacontenttype\":\"application/json\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"source\":\"/events/caas/smart/record/composition\",\"specversion\":\"1.0\",\"subject\":
\"record-composition-response-start\",\"time\":\"2023-11-30T15:01:05.756937686Z\",\"type\":\"com.cnc.caas.composition.response.start.private\"},\"CommunicationGroupID\":\"mbrmatreqok\",\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"PreRendered\":false}}}}},{\"source\":\"handler.go:46\",\"timestamp\":\"2023-11-30T15:01:07.21572506Z\",\"msg\":\"mongo insert id is 6568a3b3ab042d54478ef071\"}],\"RetryCount\":1,\"level\":\"error\",\"msg\":\"Log collector output\",\"time\":\"2023-11-30T15:01:07Z\"}","kubernetes":{"pod_name":"eventsupdatetomongo-d98bb8594-cnbsd","namespace_name":"caas-composition-layer","pod_id":"50d49842-793a-41c8-a903-11c23607dfd6","labels":{"app":"eventsupdatetomongo","pod-template-hash":"d98bb8594","version":"dcode-801-1.0.1-2745653"},"annotations":{"cattle.io/timestamp":"2023-06-08T22:30:33Z","cni.projectcalico.org/containerID":"58cf3b42ab43fac0a5bf1f97e5a4a7db9dbf6a572705f02480384e63c2a53288","cni.projectcalico.org/podIP":"172.17.224.31/32","cni.projectcalico.org/podIPs":"172.17.224.31/32","kubectl.kubernetes.io/restartedAt":"2023-11-20T17:28:31Z"},"host":"ip-10-168-125-122.ec2.internal","container_name":"eventsupdatetomongo","docker_id":"c83dd87422fbdcae60a40ac50bcad0f387d50f3021975b81dbccac1bc0d965b2","container_hash":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo@sha256:3b7e5e0908cec3f68baa7f9be18397b6ce4aa807f92b98b6b8970edac9780388","container_image":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo:dcode-801-1.0.1-2745653"}}