All Posts

Hello everyone, I have the Splunk query below, which displays the following output:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")
| search "* Did not observe any item or terminal signal within*"
| spath "paymentStatusResponse.orderCode"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster, values(host) as hostname, count(host) as count, values(correlation-id{}) as corr_id, values(paymentStatusResponse.orderCode) as order_code

The query covers two loggers. In the PaymentErrorHandler logger, I get the message containing "Did not observe any item or terminal signal within". In the EmsPaymentStatusClientImpl logger, I get the JSON response object containing the paymentStatusResponse.orderCode value. Both loggers have correlation-id{} as a common element. I want to output a table containing cluster, hostname, count, corr_id and order_code, but order_code is always empty. Please help.
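One thing that stands out (an observation, not a confirmed diagnosis): the | search "* Did not observe any item or terminal signal within*" line keeps only the PaymentErrorHandler events, so the JSON events that carry orderCode never reach the stats. A minimal sketch of one possible approach, dropping that filter and grouping both loggers by the shared correlation id (field names are taken from the post; the spath output field and the quoted by-field are illustrative and may need adjusting):

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")
| spath "paymentStatusResponse.orderCode" output=order_code
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster, values(host) as hostname, count as count, values(order_code) as order_code by "correlation-id{}"
| rename "correlation-id{}" as corr_id

With the events grouped per correlation id, the order_code extracted from the JSON events lands on the same row as the matching error events.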
I have this query, which is not mapped to an ink name:

| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
| sort - time
| table time ink_type

It produces the result shown. I want the result to show only the latest log date; in this case it would only show the top 3. And when new logs come in, it should show those new logs only.
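A minimal sketch of one way to keep only the rows whose extracted date matches the most recent date in the results (this assumes the first 10 characters of the extracted time field, YYYY-MM-DD, define the "latest log date"):

| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
| eval day=substr(time, 1, 10)
| eventstats max(day) as latest_day
| where day=latest_day
| sort - time
| table time ink_type

Because the dates are zero-padded YYYY-MM-DD strings, max() on them sorts chronologically, so only events from the newest day survive the where clause.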
Hello everyone, could you please help me edit this app for FMC logs?
Is it sending too much data, including its own logs? I think the endpoint server is busy. Did you try sending a small batch of events to test on one of those Linux servers?
Try sending data from just one UF to isolate whether it's a load issue.
Check whether there are any SSL/TLS version mismatches between 9.2.3 and 9.3.x.
Review these settings if you haven't (see the sketch below): check outputs.conf and verify inputs.conf.
https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Outputsconf#HTTP_Output_stanzas
If this helps, please upvote.
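A minimal sketch of the S2S-over-HTTP pairing that the linked outputs.conf docs describe (the host, port, and token values are placeholders; verify the exact setting names against the docs for your version):

# outputs.conf on the UF
[httpout]
httpEventCollectorToken = <token>
uri = https://hf.example.com:8088

# inputs.conf on the HF: the receiving HTTP input must be enabled
# and carry the same token the UFs send
[http]
disabled = 0

[http://s2s_from_ufs]
token = <token>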
Hi @jto13, Can you please share a few samples of the raw data from which you are trying to extract the fields? If there is any sensitive information, mask it before sharing. Also wanted to confirm the data flow: is it from HF -> indexer?
Hi Rick, Thanks for the info, understood. For now, we are trying to get it to work first, to at least get the fields and maybe understand where we configured it wrong. Could there be any problem with our props.conf and transforms.conf that made it unable to work?
Hi, We recently upgraded the heavy forwarders (HFs) of our Splunk Enterprise deployment. After the upgrade, the universal forwarders stopped sending data (e.g. Linux logs) to the HFs over HTTP, and the logs are not searchable on the search head. We upgraded from v9.1.2 to 9.3.0; we also tried 9.3.1, which did not make any difference, logs are still not being sent. v9.2.3 works without issues. I checked the logs on a UF on v9.3.x and can see:

ERROR S2SOverHttpOutputProcessor [8340 parsing] - HTTP 503 Service Unavailable

However, I cannot figure out what causes the issue. Telnet from the UF to the HF works, and telnet from the HF to the indexers also works. The tokens on the deployment server and the UFs are the same. Please advise.
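Two internal-log searches that may help narrow a 503 down (a suggestion based on Splunk's standard internal logging; treat the component names as assumptions to verify):

On the UF:
index=_internal source=*splunkd.log* component=S2SOverHttpOutputProcessor

On the HF:
index=_internal source=*splunkd.log* component=HttpInputDataHandler (ERROR OR WARN)

Since telnet works in both directions, a 503 usually points at the receiving HTTP input on the HF being disabled, misconfigured, or backed up by full queues, rather than at network connectivity.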
Hello, We are in the process of fully migrating our Splunk Enterprise deployment to Azure and will no longer be using Splunk Enterprise on-premises. Specifically, I have a question about moving the search head and all its associated components to the cloud without causing disruptions. While we found a work instruction on the Splunk website, it wasn't clear enough to follow, and we're concerned about minimizing downtime during the migration. Could anyone provide step-by-step guidance or best practices for migrating a Splunk search head and its components to Azure, ensuring no service interruptions during the transition? Your help would be greatly appreciated!
Hi guys, I had the same issue with Splunk 9.3.0 and DB Connect 3.8.x. To solve it, just clear the browser cache. Regards,
Hi All, I am running the command below:

| makeresults
| eval refresh_time=strftime(_time, "%A,%Y-%m-%d %H:%M:%S")
| table refresh_time

How do I change the layout to the following?

Tuesday , 11/05/2024 NZST 22:12:39

Thanks
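A minimal sketch that rearranges the strftime format string to match that layout (this assumes the date portion is month/day/year; %Z prints the timezone abbreviation, e.g. NZST, so the exact label depends on your timezone settings):

| makeresults
| eval refresh_time=strftime(_time, "%A , %m/%d/%Y %Z %H:%M:%S")
| table refresh_time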
Hi All, I'm having trouble getting conditional formatting to work for a column chart in Dashboard Studio. I want something pretty simple: I want the column "ImpactLevel" to be colored red if the value is less than 50, orange if the value is between 50 and 80, and yellow if the value is more than 80. ImpactLevel is the only series on the y2 axis of the column chart. Here is the JSON for my chart:

"type": "splunk.column",
"options": {
    "y": "> primary | frameBySeriesNames('_lower','_predicted','_upper','avg','max','min','volume','ImpactLevel')",
    "y2": "> primary | frameBySeriesNames('ImpactLevel')",
    "y2AxisMax": 100,
    "overlayFields": [
        "volume"
    ],
    "legendDisplay": "bottom",
    "seriesColorsByField": {
        "ImpactLevel": [
            {
                "value": "#dc4e41",
                "to": 50
            },
            {
                "value": "#f1813f",
                "from": 50,
                "to": 80
            },
            {
                "value": "#f8be44",
                "from": 80
            }
        ]
    }
},
"dataSources": {
    "primary": "ds_9sBnwPWM_ds_stihSmPw"
},
"title": "HP+ Claims E2E",
"showProgressBar": true,
"eventHandlers": [
    {
        "type": "drilldown.linkToDashboard",
        "options": {
            "app": "sre",
            "dashboard": "noc_priority_dashboard_regclaimdrilldown",
            "newTab": true,
            "tokens": [
                {
                    "token": "time.latest",
                    "value": "$time.latest$"
                },
                {
                    "token": "time.earliest",
                    "value": "$time.earliest$"
                },
                {
                    "token": "span",
                    "value": "$span$"
                }
            ]
        }
    }
],
"showLastUpdated": false,
"context": {}
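For value-based coloring, Dashboard Studio's documented pattern in other visualizations (single value, tables) routes colors through a context configuration consumed with rangeValue; whether splunk.column accepts this through a column-color option is an assumption to verify against the Dashboard Studio docs for your version. A sketch under that assumption, with impactColorConfig as a hypothetical name:

"options": {
    "columnColors": "> primary | seriesByName('ImpactLevel') | rangeValue(impactColorConfig)"
},
"context": {
    "impactColorConfig": [
        { "value": "#dc4e41", "to": 50 },
        { "value": "#f1813f", "from": 50, "to": 80 },
        { "value": "#f8be44", "from": 80 }
    ]
}

The ranges mirror the prose (red below 50, orange from 50 to 80, yellow above 80); double-check the boundary behavior of from/to against the docs.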
Hi there, Missing events from WEF/WEC can be caused by the log file size; if it is too small, events rotate away before the UF even has a chance to read them .. don't ask how I know. Increasing the size of the ForwardedEvents channel will help resolve this (see the sketch below). Hope this helps ... cheers, MuS
Thanks @PickleRick. This was my thinking as well. We're really only doing around 1500 EPS, so I'm unsure why some messages make it through and others don't. Yeah, I've looked into the links you provided previously. The problem is getting hold of ecmangen.exe, as you have to install quite an old Windows 10 SDK to access it; it's been removed from all recent SDKs. We're running Server 2022 on our WEF collector.
"Scarily enough, it appears to be enabled by default."

At least with 9.3.1, this feature is not enabled by default:

search_history_storage_mode = <string>
* The storage mode by which a search head cluster saves search history.
* Valid storage modes include "csv" and "kvstore".
[...]
* Default: csv

https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Limitsconf#History
Hi, I know it's a bit confusing, but when I run my query, the Uptime field has the value 0,00 for every _time. It does not matter how many decimals come after the 0.
Value 0,00 of which field(s)?
We need more information. Which "Splunk" is being installed: Splunk Enterprise, Splunk Universal Forwarder, or something else? What OS do the devices run? Please elaborate on "we have been unsuccessful in getting the CLI commands to work". Which commands? What happens (or doesn't) when you use them? What error messages do you get? How are the devices managed? Many sites use their existing management software (SCCM, BigFix, etc.) to deploy Splunk UFs successfully.
In SPL, there's no such thing as a "variable".  We call them "fields".
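A one-line illustration of the distinction (the field name is arbitrary): values are held in fields created per event, typically with eval:

| makeresults
| eval my_field="hello"
| table my_field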
And if you run:

| tstats count where index=<your_index> earliest=1 latest=+10y

Anyway, that might call for a support case.
We have logs that are written to /var/log and /var/log/audit. We need to keep these for 365 days and want to ensure that we are following best practices. Is there a set of configuration settings we can follow? Ultimately, we want to ensure we have log retention and that /var/log is not a cluttered mess. Thank you!
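If the 365-day requirement applies to the copies indexed in Splunk (an assumption; retention of the raw files in /var/log themselves would be a logrotate matter), a minimal indexes.conf sketch per index looks like this, with the index name as a placeholder:

# indexes.conf
[linux_logs]
# 365 days in seconds; buckets older than this are frozen (deleted unless a frozen path/script is configured)
frozenTimePeriodInSecs = 31536000
# size cap in MB; whichever limit is hit first triggers freezing
maxTotalDataSizeMB = 500000

Note that data can age out before 365 days if the size cap is reached first, so size maxTotalDataSizeMB to comfortably hold a year of volume.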