All Posts

Try something like this:

index=test
| stats sum(score) as scoreSum by vuln
| eventstats sum(scoreSum) as total
| eval scoreSum_pct=100*scoreSum/total
| fields - total
| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum scoreSum_pct
| eval scoreSum_pct = round(scoreSum_pct,1) . "%"
| table vuln, scoreSum, scoreSum_pct
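The arithmetic that the eventstats/eval steps perform (percent of the grand total, rounded to one decimal) can be sketched in Python; the sample numbers are taken from the question and are only illustrative:

```python
# Hypothetical scoreSum-by-vuln table, mirroring the stats output above.
rows = {"vulnA": 20, "vulnB": 40, "vulnC": 80}

total = sum(rows.values())  # eventstats sum(scoreSum) as total

# eval scoreSum_pct = round(100 * scoreSum / total, 1) . "%"
pct = {vuln: f"{round(100 * s / total, 1)}%" for vuln, s in rows.items()}

print(pct)  # {'vulnA': '14.3%', 'vulnB': '28.6%', 'vulnC': '57.1%'}
```

The key point is that the percentage must be computed from a total that exists on every row (hence eventstats before addcoltotals), not from the column-total row that addcoltotals appends afterwards.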
In your panel, define a table with a search query that finds the events containing your specific text.
| stats list(_raw) as _raw list(T) as T by ID | where T=="A"
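The grouping logic of that SPL can be sketched in Python: collect raw events by ID, then keep only the groups in which some event carries T == "A" (treating the multivalue comparison as "any value equals A"). The event tuples here are hypothetical:

```python
from collections import defaultdict

# Hypothetical (id, t, raw) events standing in for the indexed data.
events = [
    ("123", None, "moduleA ID[123]"),
    ("123", "A",  "moduleB ID[123]T[A]"),
    ("987", "B",  "moduleE ID[987]T[B]"),
]

# stats list(_raw) as _raw list(T) as T by ID
groups = defaultdict(list)
for event_id, t, raw in events:
    groups[event_id].append((t, raw))

# where T=="A": keep a group if any of its events has T == "A"
matching = {eid: [raw for _, raw in evs]
            for eid, evs in groups.items()
            if any(t == "A" for t, _ in evs)}

print(matching)  # only ID 123 survives
```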
I have a dashboard for my application, and in that dashboard I have created an empty panel to show the application's logs when a certain exception occurs. For that I have added a log.info call with some unique text in it. How do I configure the empty panel on the dashboard so that the logs containing that unique text are displayed in the panel from now on?
How to use addcoltotals to calculate percentage? For example, in my search below, scoreSum_pct is empty. Thank you for your help.

index=test
| stats sum(score) as scoreSum by vuln
| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum
| eval scoreSum_pct = scoreSum/Total_scoreSum*100 . "%"
| table vuln, scoreSum, scoreSum_pct

Result:

vuln            scoreSum  scoreSum_pct
vulnA           20
vulnB           40
vulnC           80
Total_scoreSum  140

Expected result:

vuln            scoreSum  scoreSum_pct
vulnA           20        14.3%
vulnB           40        28.6%
vulnC           80        57.1%
Total_scoreSum  140       100%
Completely ridiculous: as of Nov 2023, on Splunk Enterprise v9.1.1, installing Python for Scientific Computing (for Windows 64-bit) is still an issue, even after changing the upload limit in web.conf as documented:

# Default is to allow files up to 500MB to be uploaded to the server
# change this in local/web.conf if necessary
# max_upload_size = 500
max_upload_size = 5000

There was an error processing the upload. Error during app install: failed to extract app from C:\Windows\TEMP\tmpp0s51vp5 to D:\Program Files\Splunk\var\run\splunk\bundle_tmp\f8e20425d3b382c7: The system cannot find the path specified.

What am I missing here? How large does this setting need to be, and in what units? Or where are the brains of Splunk? Splunk Enterprise after v9.1.x is full of **bleep**.
@ITWhisperer you missed the main question; I explained it step by step in the main question! Would you please check the main question and tell me whether there is any way to do that? Thanks.
That's another solution, but it's worth noting the difference in how those two SPL searches are processed and the possible difference in performance.
| stats list(_raw) as _raw by ID
@ITWhisperer group by id
How do you determine which events are part of a "transaction"?
Confirm you see the HF's internal logs in Splunk Cloud (search for "index=_internal host=<<your HF name>>").  If you don't then the HF is not connecting to Splunk Cloud (did you install the Universal... See more...
Confirm you see the HF's internal logs in Splunk Cloud (search for "index=_internal host=<<your HF name>>").  If you don't then the HF is not connecting to Splunk Cloud (did you install the Universal Forwarder app on the HF?) and that should be fixed first.  If the HF's logs are in the cloud then use them to determine why otel data is not getting in.
@ITWhisperer

#txn1
16:30:53:002 moduleA ID[123]
16:30:54:002 moduleA ID[123]
16:30:55:002 moduleB ID[123]T[A]
16:30:56:002 moduleC ID[123]

#txn2
16:30:57:002 moduleD ID[987]
16:30:58:002 moduleE ID[987]T[B]
16:30:59:002 moduleF ID[987]
16:30:60:002 moduleZ ID[987]
I need advice on troubleshooting SplunkHecExporter. I'm using an OpenTelemetry Collector to accept logs via OTLP and export them to an on-prem Splunk Heavy Forwarder, which then forwards them to Splunk Cloud. Below is my configuration. I'm sending some test logs from Postman, but the logs don't arrive in Splunk Cloud. I can see the logs arriving in the OpenTelemetry Collector through the debug exporter. I confirmed connectivity to the Splunk Heavy Forwarder by setting an invalid token, which results in an authentication error; using a valid token doesn't result in any debug logs being recorded. Any suggestions on troubleshooting?

exporters:
  debug:
    verbosity: normal
  splunk_hec:
    token: "<valid token>"
    endpoint: "https://splunkheavyforwarder.mydomain.local:8088/services/collector/event"
    source: "oteltest"
    sourcetype: "oteltest"
    index: "<valid index>"
    tls:
      ca_file: "/etc/otel/config/certs/ca_bundle.cer"
    telemetry:
      enabled: true
    health_check_enabled: true
    heartbeat:
      interval: 10s

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [splunk_hec, debug]
  telemetry:
    logs:
      level: "debug"
Hi @sarge338 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Another incredible answer!  These helped me a lot!
Incredible answer!
Note: botsv1 means absolutely nothing to most volunteers in this forum. If there is something special about this dataset, you need to explain it very clearly.

Also important: when you have sample code that doesn't do what you wanted, you need to illustrate what it actually outputs, and explain why it doesn't meet your requirement if that's not painfully obvious. Did your sample code give you the desired result? Based on your sample code, I speculate that the so-called URI is in the field src_ip? Why do you use list, not values? What is the use of a list of count? What's wrong with this simpler formula?

index=indexname
| stats values(domain) as Domain count as total by src_ip
| sort -total
| head 10

Without SPL, can you explain/illustrate what the data is like (anonymize as necessary), illustrate what the end result should look like using that illustrated data, and describe the logic between that data and your desired result? This is the best way to get help with data analytics.

I can speculate that you want to display the individual count of domains by src_ip, too. If so, designing a proper visual vocabulary is a lot better. For example:

index=indexname
| stats count by domain, src_ip
| sort - count
| stats list(count . " (" . domain . ")") as DomainCount, sum(count) as total by src_ip
| sort - total DomainCount
| head 10
| fields - total

Just note that this is mathematically equivalent to your code. So you will need to illustrate the output and explain why that's not the desired result.
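The counting logic in that second SPL (count per domain/src_ip pair, then a per-src_ip total with the per-domain breakdown kept alongside) can be sketched in Python; the events here are hypothetical stand-ins for the indexed data:

```python
from collections import Counter, defaultdict

# Hypothetical (src_ip, domain) pairs standing in for the indexed events.
events = [
    ("10.0.0.1", "a.example"), ("10.0.0.1", "a.example"),
    ("10.0.0.1", "b.example"), ("10.0.0.2", "b.example"),
]

# stats count by domain, src_ip
pair_counts = Counter(events)

# stats list(...) as DomainCount, sum(count) as total by src_ip,
# visiting pairs in descending count order (sort - count)
totals = defaultdict(int)
breakdown = defaultdict(list)
for (ip, domain), n in pair_counts.most_common():
    totals[ip] += n
    breakdown[ip].append(f"{n} ({domain})")

# sort - total | head 10
top = sorted(totals, key=totals.get, reverse=True)[:10]
for ip in top:
    print(ip, totals[ip], breakdown[ip])
```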
index="index1" |lookup lookup ip_address as src_ip | where isnotnull(Cidr)
What do you mean by transaction?