All Posts

The most typical reason for truncation when using syslog is that you're sending events over UDP and hitting the datagram size limit. Do you receive syslog on your "syslog server" and write events to file(s) from which you pick them up with a UF? If so, check the contents of the intermediate file(s). If the events are truncated there, it's a problem on the syslog side. If the events are OK there but are truncated after ingesting with the UF, it's on the UF's (or indexer's) side. If it's the syslog side, you can try switching to TCP, as sketched below.
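For illustration, a minimal receiver-side sketch, assuming rsyslog is the syslog daemon (the port, source IP and output path are purely illustrative):

    # /etc/rsyslog.d/f5-tcp.conf - receive over TCP instead of UDP
    module(load="imtcp")              # TCP listener module
    input(type="imtcp" port="514")    # listen on TCP/514
    # write events from the F5 (illustrative IP) to a file monitored by the UF
    if ($fromhost-ip == "10.0.0.5") then {
        action(type="omfile" file="/var/log/f5/f5.log")
        stop
    }

The sending side (the F5 in this case) would then have to be reconfigured to log to TCP/514 as well.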
Can't help you here beyond advising again to check the docs; I haven't dealt with ePO for several years now. If by "logs aren't received in full" you mean that events are truncated, you're probably trying to send them over UDP, in which case you are limited by the maximum UDP datagram length. Switch to TCP (again, as far as I remember, ePO requires TLS encryption over TCP, so it might be a little trickier to configure) and you're all set.
Hi @splunklearner, I suppose that you're using the standard add-on from Splunkbase (https://splunkbase.splunk.com/app/2846); if not, use it. Check whether the logs are truncated or divided into two events. If truncated, check the TRUNCATE option for that sourcetype (see the sketch below); if divided, check whether there's a date inside the events that could be triggering line breaking. Ciao. Giuseppe
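A minimal props.conf sketch, assuming the sourcetype name below (yours may differ; the limit shown is an arbitrary example):

    # props.conf on the component that parses the data (indexer or heavy forwarder)
    [f5:syslog]
    TRUNCATE = 10000

The default is 10000 bytes, so if your events are longer than that, raise the limit, or set TRUNCATE = 0 to disable truncation entirely (use with care).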
Hi, the F5 team is sending logs to our Splunk syslog server as comma-separated values. After onboarding we see that some field values (string values) are being truncated.
Example:
From F5: Violation_details=xxxxxxxxxxxxx (say 50 words)
After onboarding to Splunk: Violation_details=xxxxx (truncated)
What might be the issue here? Syslog server -- UF -- Indexer (our flow)
Thank you for the information! Currently, I'm receiving logs from the ePO server via syslog, but the logs aren't being received in full. To improve this, I'm considering using the ePO API for more reliable log collection. Could you guide me on how to configure log ingestion from the ePO server using its API instead of syslog? I would appreciate details on:
- Steps for setting up ePO API integration with Splunk
- Any authentication requirements or best practices for secure data transfer
- Example scripts or configurations, if available
Thank you in advance for any guidance!
Hi, I have a huge set of data with different emails in it, and I want to set up email alerts for a few parameters. The issue is that I'm unable to group the events by email and send an email alert with a CSV attachment of the results.
Example: abc@email has around 80 events in the table; I want to send only one alert to abc with all 80 events in it as a CSV attachment. There are around 85+ emails in my data, and they have to be grouped using only one SPL search that can be used in an alert.
Note: don't suggest $result.field$ or stats to group, it's not useful for me. Thank you
We are using Splunk forwarder v9.0.3. One of the X509 validations we would like to perform against the TLS server certificate coming from the Splunk indexer is ExtendedKeyUsage (EKU) validation for server authentication. We generated the TLS server certificate without the ExtendedKeyUsage extension to test this use case. However, the Splunk forwarder is still accepting the TLS server certificate. Ideally, it should only accept the certificate when ExtendedKeyUsage is set to server authentication. Is this a known limitation, or does it require a configuration change to perform this EKU validation? Please advise. Below are our outputs.conf contents.

    [tcpout-server://host:port]
    clientCert = /<..>/clientCert.pem
    sslPassword = <..>
    sslRootCAPath = /<..>/ca.pem
    sslVerifyServerCert = true
    sslVerifyServerName = true
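As a quick way to confirm what the certificate actually carries, you can inspect its extensions with openssl (the file name is illustrative):

    # look for the "X509v3 Extended Key Usage" extension
    openssl x509 -in serverCert.pem -noout -text | grep -A1 "Extended Key Usage"

For server authentication you'd expect to see "TLS Web Server Authentication" there; if the extension is absent, nothing is printed.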
If you use "Global Account" from the Component Library, then you should be able to access the account data like this:

    tenant_data = helper.get_arg('tenant')

where 'tenant' is the Global Account component's name. As a result, the variable tenant_data will be initialized with a dictionary with the following keys: name, username and password for the specific account, so you can use the username and password keys, e.g. for authentication.
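For instance, a minimal sketch inside the generated modular input code (the component name 'tenant' and the endpoint URL are assumptions, not part of the Add-on Builder API):

    import requests  # assuming the input talks to an HTTP API

    def collect_events(helper, ew):
        # dict with keys: name, username, password
        tenant_data = helper.get_arg('tenant')
        resp = requests.get(
            "https://api.example.com/logs",  # illustrative endpoint
            auth=(tenant_data['username'], tenant_data['password']),
            timeout=30,
        )
        helper.log_info("API response status: %s" % resp.status_code)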
Settings -> Distributed Environment -> Distributed Search -> Search Peers -> Add New
As I said before, for the indexer cluster you only need to add the CM; the indexers should populate automatically. The rest of the components you need to add one by one. Then, in the Distributed Monitoring Console, you'll have to set up roles for each of those components.
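The same can be done from the CLI on the MC instance (host, port and credentials are illustrative):

    splunk add search-server https://cm.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme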
Nope. Can you provide me with the guidelines to add it?
Hi ITWhisperer, downtime represents every value starting with 0,00, no matter how many decimals. BR
And did you add your components as search peers to your MC? (for the indexer cluster you only need to add the CM)
OK. But what is your issue here? You have a timestamp but don't know how to render it into text with a given format? For that you use either eval or fieldformat with a strftime function. Or you alr... See more...
OK. But what is your issue here? Do you have a timestamp but don't know how to render it into text with a given format? For that you use either eval or fieldformat with the strftime function. Or do you already have a string value but have some problems with putting it on a dashboard? (What problems exactly?)
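For example (the format string is illustrative):

    | eval time_text = strftime(_time, "%Y-%m-%d %H:%M:%S")

or, to change only how the field is displayed without altering the underlying value:

    | fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")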
OK. This is indeed interesting. The search behind this panel uses the /services/server/status/partitions-space REST endpoint. This endpoint, according to the docs, returns four values:
- capacity
- free
- fs_type
- mount_point
(along with some "standard" fields like title, author, id and eai stuff)

But the actual data returned by the call also includes a field called "available". And in my case the "available" field indeed shows the free space on the filesystem. The "free" field (again, in my case) contains some value completely unrelated to anything. But the search behind the MC panel uses the field "available" if it's included in the data. If it's not included, it uses the "free" field. Check the results of

    | rest splunk_server=<your indexer> /services/server/status/partitions-space
    | fields - eai* id author published updated title

and see if the data makes sense. I suspect you're not getting the "available" field and your "free" field contains some bonkers value.

EDIT: Posted feedback to the docs page describing this REST endpoint.
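For reference, the fallback logic the MC panel applies can be approximated like this (a sketch, not the panel's exact search):

    | rest splunk_server=<your indexer> /services/server/status/partitions-space
    | eval free = coalesce(available, free)
    | table mount_point fs_type capacity free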
There is no single good answer to this question. Generally, indexed fields cause additional overhead in terms of storage size, can, if bloated, counterintuitively have a negative impact on performance, and for straight event searches do not give that much of a performance gain versus a well-written raw-events search.

Having said that, there are some scenarios where adding some indexed fields helps. One is when you do a lot of summarizing on some fields. Not searching but summarizing. Then indeed tstats is lightning fast compared to search | stats (see the sketch below). OTOH you can usually get similar results with report acceleration or summary indexing, so indexed fields might not be needed.

Another case is when you have values which can appear often in multiple fields. Splunk searches by finding values first and then parsing the events containing those values to find out whether the value parses out to the given field. So if you have 10k events of which only 10 contain the string "whatever", and nine of those ten have it as the value of a field named comment, a search for "comment=whatever" will only need to check 10 events out of those 10k, and 90% of the considered events will match. So the search will be quite effective. But if your data contained the word "whatever" in 3k events of which only 9 had it in the comment field, Splunk would have to fetch all 3k events, parse them and see if the comment field indeed contained that word. Since only 9 of those 3k events contain the word in the right spot, this search would be very ineffective. An indexed field would let Splunk skip straight to those 9 events.

So there is no one size fits all. The general rule is that adding indexed fields can sometimes help; it's not a thing that should never be used at all, but it should only be done when indeed needed, not added blindly for all possible fields in all your data, because then you're effectively transforming Splunk into something it is not: a document database with schema on index. And for that you don't need Splunk.

And if your SH is already overloaded, that usually means (again, as always, it depends on the particular case; yours might be completely different, but I'm speaking from general experience) that either you simply have too many concurrently running searches, in which case creating indexed fields won't help much, or you have badly written searches (which is nothing to be ashamed of; Splunk is easy to start working with but can be tricky to master, and writing effective searches requires quite a significant bit of knowledge).
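To illustrate the summarizing case (the index and field names are illustrative; tstats here requires "status" to be an indexed field):

    index=web | stats count by status

parses every raw event in the time range, while

    | tstats count where index=web by status

reads only the index-time data, which is why it is so much faster.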
Hi @hazem , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
My tests yesterday seemed to confirm it. I have a test index. I run

    | eventcount index=test2
    | eval type="eventcount"
    | append [ | tstats count where earliest=1 latest=+10y index=test2 | eval type="tstats"]

and get

    count   type
    35172   eventcount
    31077   tstats

(yesterday I had already removed some events). So I run

    index=test2 earliest=-2y@y latest=@y | delete

Splunk says it deleted 27549 events. So I rerun my counting search and this time I get

    count   type
    35172   eventcount
    3528    tstats

So you can see: deleting events changes tstats but doesn't touch eventcount.
Hi @Ananya_23, the only way is adding JS that does the same thing, but I cannot help you there because I'm not so strong in JS development. A simpler way is to add an option to display _raw, i.e. add the _raw field to the table command and display it (see the example below). Ciao. Giuseppe
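For example (the other columns are illustrative):

    | table _time host source _raw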
Hi @gcusello, agreed, _raw gives me all the details of that particular event. But here I want the "Show Source" link to be displayed on the dashboard.
Hi @krishna1, you only have to remove the filter (the where command). Optionally, you could add a calculation (eval command) to indicate whether an event is matching or not, but it probably isn't relevant because the matching ones have a value in the work_queue field. Ciao. Giuseppe
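Something like this, as a sketch (the work_queue field name is taken from your search):

    | eval matched = if(isnotnull(work_queue), "yes", "no")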