All Posts

Thanks @gcusello  I have amended the query as suggested, but the order_code column is still empty in the output. The order_code value "paymentStatusResponse.orderCode" comes from one of the two loggers, the one named PaymentStatusClientImpl.
Hi @shanemhartley , ingestion in Splunk is usually done using a Technology Add-on, in your case Splunk_TA_nix (https://splunkbase.splunk.com/app/833). Install this add-on on the Universal Forwarder and enable the input stanzas you need. If you want to store these logs in a defined index (instead of main), you also have to add to each enabled input stanza the option: index = <your_index> Then install this add-on on your Search Head or your standalone Splunk server as well; this way the logs are correctly parsed and usable. For more info, see https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Getstartedwithgettingdatain ; there are also videos available. Ciao. Giuseppe
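As a concrete illustration, an enabled stanza in the add-on's local/inputs.conf on the UF might look like the following. This is a minimal sketch: cpu.sh ships with Splunk_TA_nix, but the index name and interval here are placeholders you would replace with your own values.

# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf on the UF
[script://./bin/cpu.sh]
disabled = 0
interval = 30
index = linux_os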
Hi @super_edition , first of all, don't use the search command after the main search, because it makes your search slower:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") "Did not observe any item or terminal signal within" | spath "paymentStatusResponse.orderCode" | eval clusters=coalesce(openshift_cluster, kubernetes_cluster) | stats values(clusters) as cluster, values(host) as hostname, count(host) as count, values('correlation-id{}') as corr_id, values('paymentStatusResponse.orderCode') as order_code

Also, the asterisk isn't mandatory in a string like yours. Then review the use of the spath command at https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Spath :

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") "Did not observe any item or terminal signal within" | spath output=orderCode path=paymentStatusResponse.orderCode | eval clusters=coalesce(openshift_cluster, kubernetes_cluster) | stats values(clusters) as cluster values(host) as hostname count(host) as count values('correlation-id{}') as corr_id values(orderCode) as order_code

Ciao. Giuseppe
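To verify the spath extraction in isolation, you can run a self-contained test like this. It is a minimal sketch with a made-up JSON event; the path mirrors the one in the question, and ABC123 is just a sample value:

| makeresults
| eval _raw="{\"paymentStatusResponse\": {\"orderCode\": \"ABC123\"}}"
| spath output=orderCode path=paymentStatusResponse.orderCode
| table orderCode

If orderCode populates here but not against your real events, the JSON object is likely not at the top level of _raw in the PaymentStatusClientImpl events.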
Hi @mursidehsani , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @jaibalaraman , you have to change the time format in the strftime command, applying the format you like; see the format variables at https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Commontimeformatvariables :

| makeresults | eval refresh_time=strftime(_time, "%A,%d/%m/%Y %Z %H:%M:%S") | table refresh_time

Ciao. Giuseppe
Hi @gcusello  It works! Thank you so much for your help.
Hi @mursidehsani, please try this: <your_search> | rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'" | stats values(ink_type) AS ink_type BY time | sort - time | head 1 | mvexpand ink_type Ciao. Giuseppe
Hello Everyone, I have the below Splunk query, which displays the output shown below:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") | search "* Did not observe any item or terminal signal within*" | spath "paymentStatusResponse.orderCode" | eval clusters=coalesce(openshift_cluster, kubernetes_cluster) | stats values(clusters) as cluster, values(host) as hostname, count(host) as count, values(correlation-id{}) as corr_id, values(paymentStatusResponse.orderCode) as order_code

The query covers two loggers. In the PaymentErrorHandler logger, I get the message containing "Did not observe any item or terminal signal within". In the EmsPaymentStatusClientImpl logger, I get the JSON response object containing the "paymentStatusResponse.orderCode" value. In both loggers, correlation-id{} is a common element. I want to output a table containing cluster, hostname, count, corr_id and order_code, but order_code is always empty. Please help.
I have this query (the result is not mapped to the ink name):

| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'" | sort - time | table time ink_type

It gives this result. I want the result to show only the latest log date; in this case it would show only the top 3 rows. And when new logs come in, it should show only those new logs.
Hello everyone, could you please help me edit this app for FMC logs?
Is it sending too much data, including its own logs? I think the endpoint server is busy. Did you try sending a small batch of events to test on one of those Linux servers?

Try sending data from just one UF to isolate whether it's a load issue.
Check if there are any SSL/TLS version mismatches between 9.2.3 and 9.3.x.
Review these settings if you haven't: check outputs.conf and verify inputs.conf. See https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Outputsconf#HTTP_Output_stanzas

If this helps, please upvote.
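For reference, a UF sending S2S over HTTP typically carries an outputs.conf stanza along these lines. This is a minimal sketch with placeholder host and token; compare it against what your deployment server actually pushes out:

# outputs.conf on the Universal Forwarder
[httpout]
httpEventCollectorToken = <your_token>
uri = https://<hf_hostname>:8088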
Hi @jto13  Can you please share a few samples of the raw data from which you are trying to extract the fields? If there is any sensitive information, do mask it before sharing. Also wanted to confirm the data flow: is it from HF -> indexer?
Hi Rick, Thanks for the info, understood on that. For now, we are trying to get it working first, to at least get the fields and maybe understand where we configured it wrong. Could there be any problem with our props.conf and transforms.conf that made it unable to work?
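For anyone comparing notes, a search-time field extraction wired through props.conf and transforms.conf usually follows this shape. It is a minimal sketch only; the sourcetype, stanza name, and regex are placeholders, not the poster's actual config:

# props.conf
[my_sourcetype]
REPORT-extract_fields = my_field_extraction

# transforms.conf
[my_field_extraction]
REGEX = user=(?<user>\S+)\s+action=(?<action>\S+)
# With named capture groups, the group names become the extracted fields at search time

Note that search-time extractions like this belong on the Search Head, which is one common place such configs end up on the wrong tier.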
Hi, We recently upgraded the Heavy Forwarders (HF) of our Splunk Enterprise deployment. After the upgrade, the Universal Forwarders stopped sending data (e.g. Linux logs) to the HFs over HTTP, and the logs are not searchable on the Search Head. We upgraded from v9.1.2 to 9.3.0. We also tried 9.3.1, which did not make any difference: logs are still not being sent. v9.2.3 works without issues. I checked the logs on a UF on v9.3.x and can see: ERROR S2SOverHttpOutputProcessor [8340 parsing] - HTTP 503 Service Unavailable However, I cannot figure out what causes the issue. Telnet from UF to HF works, and telnet from HF to the indexers also works. The tokens on the Deployment Server and the UFs are the same. Please advise.
Hello, We are in the process of fully migrating our Splunk Enterprise deployment to the Azure Cloud and will no longer be using Splunk Enterprise on-premises. Specifically, I have a question about moving the search head and all its associated components to the cloud without causing disruptions. While we found a Work Instruction on the Splunk website, it wasn't clear enough to follow, and we're concerned about minimizing downtime during the migration. Could anyone provide step-by-step guidance or best practices for migrating a Splunk search head and its components to the Azure Cloud, ensuring no service interruptions during the transition? Your help would be greatly appreciated!
Hi guys, I had the same issue with Splunk 9.3.0 and DB Connect 3.8.x. To solve it, you just have to clear the cache of the browser. Regards,
Hi All  I am running the below command:

| makeresults | eval refresh_time=strftime(_time, "%A,%Y-%m-%d %H:%M:%S") | table refresh_time

How do I change the format so it looks like this?

Tuesday , 11/05/2024 NZST 22:12:39

Thanks
Hi All, I'm having trouble getting conditional formatting to work for a column chart in Dashboard Studio. I want something pretty simple: the column "ImpactLevel" should be colored red if the value is less than 50, orange if the value is between 50 and 80, and yellow if the value is more than 80. ImpactLevel is the only series on the y2 axis of the column chart. Here is the JSON for my chart:

"type": "splunk.column",
"options": {
    "y": "> primary | frameBySeriesNames('_lower','_predicted','_upper','avg','max','min','volume','ImpactLevel')",
    "y2": "> primary | frameBySeriesNames('ImpactLevel')",
    "y2AxisMax": 100,
    "overlayFields": ["volume"],
    "legendDisplay": "bottom",
    "seriesColorsByField": {
        "ImpactLevel": [
            { "value": "#dc4e41", "to": 50 },
            { "value": "#f1813f", "from": 50, "to": 80 },
            { "value": "#f8be44", "from": 80 }
        ]
    }
},
"dataSources": { "primary": "ds_9sBnwPWM_ds_stihSmPw" },
"title": "HP+ Claims E2E",
"showProgressBar": true,
"eventHandlers": [
    {
        "type": "drilldown.linkToDashboard",
        "options": {
            "app": "sre",
            "dashboard": "noc_priority_dashboard_regclaimdrilldown",
            "newTab": true,
            "tokens": [
                { "token": "time.latest", "value": "$time.latest$" },
                { "token": "time.earliest", "value": "$time.earliest$" },
                { "token": "span", "value": "$span$" }
            ]
        }
    }
],
"showLastUpdated": false,
"context": {}
Hi there, Missing events from WEF/WEC can be caused by the log file size: if it is too small, events rotate away before the UF even has a chance to read them .. don't ask how I know   Increasing the size of the ForwardedEvents channel will help resolve this. Hope this helps ... cheers, MuS
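For example, the channel size can be raised with wevtutil on the collector. A sketch only: 1 GB here is an arbitrary example value, so size it to your own event volume and retention needs:

:: Run in an elevated prompt on the WEC server
:: Sets the ForwardedEvents channel maximum size to 1 GB (the /ms value is in bytes)
wevtutil sl ForwardedEvents /ms:1073741824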
Thanks @PickleRick  This was my thinking as well. We're really only doing around 1500 EPS, roughly, so I'm unsure why some messages make it through and others don't. Yeah, I've looked into the links you provided previously. The problem is getting hold of ecmangen.exe, as you have to install quite an old Win 10 SDK to access it; it's been removed from all recent SDKs. We're running Server 2022 on our WEF collector.