All Posts

I have a requirement to create a dashboard with the following JSON data:

all_request_headers: {
   Accept: */*
   Content-Length: 0
   Content-Type: text/plain
   Cookie: Cookie1=Salvin
   Host: wasphictst-wdc.hc.cloud.uk.sony
   User-Agent: insomnia/2021.5.3
}
all_response_headers: {
   Connection: keep-alive
   Content-Length: 196
   Content-Type: text/html; charset=iso-8859-1
   Date: Fri, 14 Feb 2025 15:51:13 GMT
   Server: Apache/2.4.37 (Red Hat Enterprise Linux)
   Strict-Transport-Security: max-age=31536000; includeSubDomains
}
waf_log: {
   allowlist_configured: false
   allowlist_processed: false
   application_rules_configured: false
   application_rules_processed: false
   latency_request_body_phase: 1544
   latency_request_header_phase: 351
   latency_response_body_phase: 15
   latency_response_header_phase: 50
   memory_allocated: 71496
   omitted_app_rule_stats: { }
   omitted_signature_stats: { }
   psm_configured: false
   psm_processed: false
   rules_configured: true
   rules_processed: true
   status: PASSED
}

Fields are getting auto-extracted, like waf_log.allowlist_configured ... etc. They want a neat dashboard for request headers, response headers, WAF log details, etc. How do I create this dashboard? I am confused - if we create it based on fields, there will be so many panels, right?
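Since the fields are already auto-extracted, one option is a small number of panels that each pivot one group of fields into a two-column table. A minimal sketch of a request-headers panel search, assuming a hypothetical index and sourcetype (the same pattern would work for all_response_headers.* and waf_log.*):

index=waf_events sourcetype=waf:json ``` hypothetical index and sourcetype ```
| head 1 ``` the latest matching event is enough for a headers panel ```
| table all_request_headers.*
| transpose column_name="Request Header" ``` pivot the field names into rows ```
| rename "row 1" as Value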
Is the data being sent from the origin to both syslog servers at the same time? -- Yes, both syslog servers pick up the same log and ingest it at the same time. Is it possible to control this behaviour so it sends only to the primary, or to the standby if it fails? -- How do we achieve this?
Hi @splunklearner  It sounds like your duplication is happening before the data hits Splunk - it's not easy to deduplicate this on the way through; instead you might want to look at how the data is sent to syslog. Is the data being sent from the origin to both syslog servers at the same time? Is it possible to control this behaviour so it sends only to the primary, or to the standby if it fails? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
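If the source hosts forward with rsyslog, one common way to express that primary/standby behaviour is a failover action. This is only a sketch with placeholder host names, assuming TCP forwarding and rsyslog's legacy directive syntax:

# send everything to the primary syslog server over TCP
*.* @@syslog-primary.example.com:514
# fire the next action only while the previous one (the primary) is suspended
$ActionExecOnlyWhenPreviousIsSuspended on
& @@syslog-standby.example.com:514
$ActionExecOnlyWhenPreviousIsSuspended off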
Hi @dinesh001kumar  You can add files into an app and then package this to be uploaded to Splunk Cloud - it isn't possible to upload CSS/HTML via the UI in Splunk Cloud. Create an app and add the required files to <APP NAME>/appserver/static/ so that they are then accessible within the app. Have a look at https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/UseCSS#Customize_styling_and_behavior_for_one_dashboard for more info on using CSS too. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
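A rough sketch of what the packaged app could look like - the app, dashboard, and file names below are placeholders; a Simple XML dashboard can then pull the CSS in via the stylesheet attribute on its root element:

my_custom_app/
    appserver/static/custom.css
    appserver/static/custom.html
    default/app.conf
    default/data/ui/views/my_dashboard.xml    (root element e.g. <form version="1.1" stylesheet="custom.css">)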
We have two standalone syslog servers (both active; one named primary and the other contingency), each with a UF installed that forwards data to Splunk. We have different indexes configured for these two servers. Now the issue is that the same log is getting indexed on both servers, which results in duplicate logs in Splunk.

Syslog 1 --- index = sony_a == Same log
Syslog 2 --- index = sony_b == Same log

When we search with index=sony* we get the same logs from the two indexes, which is duplication. How do we stop the two syslog servers from indexing the same log twice?
Is Job unique for each start/end? If so I would suggest something like this:

index=music Job=*
| stats earliest(_time) as start_time, latest(_time) as end_time by Job
| eval Duration=(end_time-start_time)
``` The rest of your SPL here, such as ```
| chart values(Duration) as Duration by start_time

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
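If the aim (as in the question) is to show a readable duration string, one extra eval can format it - a sketch assuming Duration is in seconds; keep the numeric Duration for the bar chart itself and use the string for labels or tooltips:

| eval Duration_readable=tostring(Duration, "duration") ``` renders seconds as HH:MM:SS ```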
Hello, we are on ES 7.3.2. We are noticing a difference between the count of notable alerts visible on the "Incident Review" page and the number of events in the notable index for the same time period. For example, our Incident Review page, when filtered to show all notables for the previous month's time range, shows 4648 notable alerts generated (screenshot attached). But if we check index=notable for the previous month's time range, it shows 4653 events. We are seeing this difference every month. Ideally both numbers should match. How do we find out what is causing this mismatch, and what exactly is the reason?
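One way to start narrowing the gap down - a sketch that assumes the Enterprise Security `notable` macro is available in your environment, since Incident Review applies filtering (for example suppression) that a raw index search does not - is to compare the two counts over the same time range:

index=notable | stats count as raw_index_count

`notable` | stats count as filtered_count ``` the notable macro applies ES enrichment and suppression filtering ```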
Hi Splunkers, I'm testing with two separate Splunk deployments: one is the provider and one is the local deployment. I want to put a lookup file/definition or a KV store on the local deployment, and when running a search against the provider (via standard or transparent mode), `lookup` against the local lookup definition. How can I do this? Or could someone please explain this context to me? I have looked around at the lookup command, federated.conf, transforms.conf, distsearch.conf... My search looks like:

```
<base search>
| fields srcip, dstip
| lookup local=true serversList ip as srcip OUTPUTNEW serverName
```
I'm able to calculate the time difference between the start and end time of my job. I want to display the string value in a bar chart - how do I achieve this?

index=music Job=*
| eval Duration=(end-start_time)
| chart values(Duration) as Duration by "Start Time"
@kiran_panchavat , thanks, but it's still not clear to me. Do you mean this sentence in the solution you gave? "Alerts are triggered if the specified search yields a non-empty search result list." It still looks like a bug to me, or at least it's very unclear.
I need to upload CSS and HTML files to Splunk Cloud. Please help me with the steps to upload them and use them in dashboard customisation, since there is no option to upload assets in Splunk Cloud.
Hi @gcusello - Will this work if we give some more values to be considered for indexing in transforms.conf?

[setparsing]
REGEX = systemd | auditd | CROND
DEST_KEY = queue
FORMAT = indexQueue

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
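For comparison, a hedged variant of the same stanza with the alternation written without surrounding spaces - literal spaces around the pipe become part of what the regex has to match, which is usually not intended:

[setparsing]
REGEX = systemd|auditd|CROND
DEST_KEY = queue
FORMAT = indexQueue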
At times, LKUP_DSN will match exactly with DS_NAME. In other instances, LKUP_DSN will contain all the characters of DS_NAME except for the last nine characters.
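A sketch in SPL of how both cases could be matched, assuming events (or lookup rows) that carry both DS_NAME and LKUP_DSN; the field names come from the post, everything else is illustrative:

| eval ds_trimmed=substr(DS_NAME, 1, len(DS_NAME)-9) ``` DS_NAME without its last nine characters ```
| eval match_type=case(LKUP_DSN=DS_NAME, "exact", LKUP_DSN=ds_trimmed, "truncated", true(), "none")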
Hello Nagarjuna, Thanks for posting your questions to our community.

> How is the compatibility between the OTel collector and AppDynamics - is it efficient and recommendable?

It depends on the agent type because not all languages are supported. Please check the doc (Backend Languages): https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/splunk-appdynamics-for-opentelemetry/support-for-cisco-appdynamics-for-opentelemetry

> If we use the OTel collector for exporting data to AppD, is it still required to use AppD agents as well?

Not necessarily. However, if you have an application that is monitored with AppDynamics Java, .NET, or Node.js Agents, you can instrument AppDynamics agents in your application to report both OpenTelemetry span data and AppDynamics SaaS data. Please check the doc: https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/splunk-appdynamics-for-opentelemetry/instrument-applications-with-splunk-appdynamics-for-opentelemetry

> How will the licensing work if we use OTel for exporting data to AppD?

Please refer to the doc: https://docs.appdynamics.com/appd/24.x/latest/en/splunk-appdynamics-licensing/license-entitlements-and-restrictions#id-.LicenseEntitlementsandRestrictionsv24.9-AppDynamicsforOpenTelemetry%E2%84%A2

> Is the OTel collector compatible with both the on-premises and SaaS environments of AppD?

Only the SaaS Controller supports OpenTelemetry. Please check the doc (Before You Begin): https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/splunk-appdynamics-for-opentelemetry/view-opentelemetry-data-in-the-controller-ui#id-.ViewOpenTelemetryDataintheControllerUIv24.7-BeforeYouBegin

Hope this information helps. Xiangning
Please show what your mailto link looks like in your XML 
The mv functions generally take an MV field as an input and then perform an operation on each of the values of the MV, so the solution

| eval filtered=mvfilter(mvfield!="N/A")

is saying: for each value of the MV field called mvfield, match it against the string "N/A" and, if it does NOT match (!="N/A"), return that value to the new field filtered, appending each non-matching value to that new field. That new field will then contain all the values of the original mvfield that did not match the string. The final eval then puts the "N/A" string back into the filtered field, so that if ALL values of the original field contained N/A the new field will have a single N/A value.

If you wanted ALL the N/A instances to be present, then replace the mvfilter line with

| eval filtered=coalesce(mvfilter(mvfield!="N/A"), mvfield)

which means that if your original field is N/A 3 times (and nothing else), you will have N/A 3 times in your final result.
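A quick run-anywhere check of the coalesce variant, using made-up sample values:

| makeresults format=csv data="sample
A;N/A;B
N/A;N/A;N/A"
| eval mvfield=split(sample, ";")
| eval filtered=coalesce(mvfilter(mvfield!="N/A"), mvfield)
``` first row keeps A and B; second row falls back to the original three N/A values ```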
mvfilter() is indeed the way to go but you need to do a bit more bending over backwards to get it only when you need it. A run-anywhere example:

| makeresults format=csv data="row
1;2;1;3
1;2;3;4
1;1;1;1
4;3;2;4;5
1;1;2;3
1;4;3;2
3;4;5;2
5;5;5
1;1"
| eval split=split(row,";") ``` This creates a set of example data ```
| eval totalcount=mvcount(split) ``` This calculates how many elements we have ```
| eval onecount=mvcount(mvfilter(if(split="1",true(),false()))) ``` This counts how many ones we have ```
| eval filtered=if(onecount>0 AND onecount<totalcount,mvfilter(if(split="1",false(),true())),split) ``` And this filters the ones, but only if there was at least one (that's generally not needed) and there are fewer ones than total values ```
@dsky55  Splunk Enterprise 9.x officially supports Python 3.7+, but some apps and add-ons may still include Python 2.x code. Even if your Splunk installation has Python 3.7.17, the Upgrade Readiness App scans app files for deprecated Python 2.x code. Please check the documentation below for more information: https://docs.splunk.com/Documentation/Splunk/9.4.0/UpgradeReadiness/ResultsPython
@AJH2000  Did you try checking with Postman or cURL?
@michael_vi

rex mode=sed "s/\*{10,}[\s\S]*?\*{10,}\n//g"

→ Removes everything between (and including) the ************************************** delimiter lines.

You can apply the configuration at index time in props.conf and transforms.conf. Note that with DEST_KEY = _raw the FORMAT value replaces the whole event, so the surrounding text has to be captured and put back:

props.conf
[YOUR_SOURCETYPE]
TRANSFORMS-remove_header = remove_header_content

transforms.conf
[remove_header_content]
REGEX = ([\s\S]*?)\*{10,}[\s\S]*?\*{10,}\n([\s\S]*)
FORMAT = $1$2
DEST_KEY = _raw
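A possible index-time alternative - a sketch that assumes the same sourcetype and reuses the same delimiter regex - is SEDCMD in props.conf, which rewrites _raw without needing a transforms.conf stanza:

props.conf
[YOUR_SOURCETYPE]
SEDCMD-remove_header = s/\*{10,}[\s\S]*?\*{10,}\n//g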