All Posts



Hi Team, below is my query:

index="abc*" sourcetype=600000304_gg_abs_ipc1 OR sourcetype=600000304_gg_abs_ipc2 "Message successfully sent to Cornerstone" source!="/var/log/messages"

I am getting results for sourcetype=600000304_gg_abs_ipc1, but I am not getting results for 600000304_gg_abs_ipc2. I need the results of both sourcetypes in one frame. Can someone help?
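One possible cause (an assumption, since the underlying data isn't shown): without parentheses, the OR only joins the two sourcetype terms' immediate neighbours, so the quoted string and the source filter may effectively apply to just one branch of the search. A hedged sketch of an explicitly grouped version:

```spl
index="abc*"
    (sourcetype=600000304_gg_abs_ipc1 OR sourcetype=600000304_gg_abs_ipc2)
    "Message successfully sent to Cornerstone"
    source!="/var/log/messages"
```

If the second sourcetype still returns nothing, it may simply contain no events matching the quoted message in the selected time range.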
There are three different events: "input param", "sqs sent count", and "total message published to SQS successfully".

With the first event, "input param", I am trying to fetch a particular entity, say material/supplied material. With the second event, "sqs sent count", I am getting the total SQS sent count for that particular material or supplied material. With the third event, "total message published to SQS successfully", I am getting the count of total messages published.

Now I want to publish all those counts in a single row of a table, per object type, to display in one dashboard panel. Then I want to total the counts for each column and display that as a single row in another panel of the dashboard.
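A rough sketch of the kind of search that could produce one row of counts per object type (the field names objectType and material are assumptions, since the actual field extractions aren't shown):

```spl
index=<your index> ("input param" OR "sqs sent count" OR "total message published to SQS successfully")
| stats values(material) as material,
        count(eval(searchmatch("sqs sent count"))) as sqs_sent_count,
        count(eval(searchmatch("total message published to SQS successfully"))) as published_count
        by objectType
| addcoltotals labelfield=objectType label=Total sqs_sent_count published_count
```

The totals row could also be produced in a separate dashboard panel by re-running the stats and summing each column.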
@ITWhisperer I have a list of environments in a drop-down, so whenever I select a different environment, I should get the logs of that environment in that panel. How do I configure that? Right now my configuration is as follows:

env=dev `app_logs(application_name)` "my unique text"
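For reference, in Simple XML a drop-down input sets a token that the panel's search can reference in place of the hard-coded value; a minimal sketch (the token name env and the choice values are assumptions):

```xml
<input type="dropdown" token="env">
  <label>Environment</label>
  <choice value="dev">dev</choice>
  <choice value="qa">qa</choice>
  <choice value="prod">prod</choice>
</input>

<panel>
  <table>
    <search>
      <query>env=$env$ `app_logs(application_name)` "my unique text"</query>
    </search>
  </table>
</panel>
```

Selecting a different environment re-runs the panel's search with the new token value.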
Thank you!
Start with the Search app. What search do you use to find the events you are interested in? For example: index=<your index> "string you want to find"
Hello @ITWhisperer, I'm completely new to Splunk. Could you please be more specific about the query that I need to use?
Try something like this

index=test
| stats sum(score) as scoreSum by vuln
| eventstats sum(scoreSum) as total
| eval scoreSum_pct=100*scoreSum/total
| fields - total
| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum scoreSum_pct
| eval scoreSum_pct = round(scoreSum_pct,1) . "%"
| table vuln, scoreSum, scoreSum_pct
In your panel, define a table with a search query that finds the events containing your specific text.
| stats list(_raw) as _raw list(T) as T by ID | where T=="A"
I have a dashboard for my application, and in that dashboard I have created an empty panel to show the logs of that application when a certain exception occurs. For that, I have added a log.info call with some unique text in it. How do I configure the empty panel on the dashboard so that the logs containing that unique text are displayed in the panel from now on?
How do I use addcoltotals to calculate a percentage? For example, in my search below, scoreSum_pct is empty. Thank you for your help.

index=test
| stats sum(score) as scoreSum by vuln
| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum
| eval scoreSum_pct = scoreSum/Total_scoreSum*100 . "%"
| table vuln, scoreSum, scoreSum_pct

Result:

vuln            scoreSum  scoreSum_pct
vulnA           20
vulnB           40
vulnC           80
Total_scoreSum  140

Expected result:

vuln            scoreSum  scoreSum_pct
vulnA           20        14.3%
vulnB           40        28.6%
vulnC           80        57.1%
Total_scoreSum  140       100%
Completely ridiculous: as of Nov-2023, on Splunk Enterprise v9.1.1, installing Python for Scientific Computing (for Windows 64-bit) is still an issue, even after changing the upload limit in web.conf as documented:

# Default is to allow files up to 500MB to be uploaded to the server
# change this in local/web.conf if necessary
# max_upload_size = 500
max_upload_size = 5000

There was an error processing the upload. Error during app install: failed to extract app from C:\Windows\TEMP\tmpp0s51vp5 to D:\Program Files\Splunk\var\run\splunk\bundle_tmp\f8e20425d3b382c7: The system cannot find the path specified.

What am I missing here? How large does this value have to be, and in what units? Splunk Enterprise after v9.1.x has been nothing but trouble.
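For reference, max_upload_size in web.conf is specified in megabytes and belongs in the [settings] stanza; a minimal local/web.conf override might look like this (the value is just an example):

```ini
[settings]
max_upload_size = 2048
```

Note that the quoted error ("The system cannot find the path specified") reads like a file-path or permissions failure during extraction rather than an upload-size rejection, so the size setting may not be the culprit here.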
@ITWhisperer you missed the main question; I explained it step by step in the main question! Would you please check the main question again and tell me whether there is any way to do that? Thanks.
That's another solution but it's worth noting the difference in the search process of both those SPLs and the possible difference in performance.
| stats list(_raw) as _raw by ID
@ITWhisperer group by ID
How do you determine which events are part of a "transaction"?
Confirm you see the HF's internal logs in Splunk Cloud (search for "index=_internal host=<<your HF name>>").  If you don't then the HF is not connecting to Splunk Cloud (did you install the Universal Forwarder app on the HF?) and that should be fixed first.  If the HF's logs are in the cloud then use them to determine why otel data is not getting in.
@ITWhisperer

#txn1
16:30:53:002 moduleA ID[123]
16:30:54:002 moduleA ID[123]
16:30:55:002 moduleB ID[123]T[A]
16:30:56:002 moduleC ID[123]

#txn2
16:30:57:002 moduleD ID[987]
16:30:58:002 moduleE ID[987]T[B]
16:30:59:002 moduleF ID[987]
16:30:60:002 moduleZ ID[987]
I need advice on troubleshooting SplunkHecExporter. I'm using an OpenTelemetry Collector to accept logs via OTLP and export them to an on-prem Splunk Heavy Forwarder, which then forwards them to Splunk Cloud. Below is my configuration. I'm sending some test logs from Postman, but the logs don't arrive in Splunk Cloud. I can see the logs arriving at the OpenTelemetry Collector through the debug exporter. I confirmed connectivity to the Splunk Heavy Forwarder by setting an invalid token, which results in an authentication error; using a valid token doesn't produce any debug logs at all. Any suggestions on troubleshooting?

exporters:
  debug:
    verbosity: normal
  splunk_hec:
    token: "<valid token>"
    endpoint: "https://splunkheavyforwarder.mydomain.local:8088/services/collector/event"
    source: "oteltest"
    sourcetype: "oteltest"
    index: "<valid index>"
    tls:
      ca_file: "/etc/otel/config/certs/ca_bundle.cer"
    telemetry:
      enabled: true
    health_check_enabled: true
    heartbeat:
      interval: 10s

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [splunk_hec, debug]
  telemetry:
    logs:
      level: "debug"