All Posts

When including a result field in an alert, only the first result is included. There is no way to include anything other than the first result.
So I got it to work, but only after giving up completely on logic. When installing the app, the prompt clearly asks for a username and password, not an email and password. To make my life easier I gave everything the same password so I wouldn't have to juggle passwords for a demo. So imagine my surprise when it kept saying the username and password were wrong. I finally said the heck with it, used my full email address as the username, typed the password again, and it worked and installed the app without any issues. My only conclusion is that the prompt is worded wrong: it is actually asking for the email and password from the webpage sign-in, not the admin username and password you would assume it wants.
We are using the query below for our alert. When we receive the mail, we want to see the Message in the alert title. In the subject we use Splunk Alert: $name$, and when the alert is triggered we want the matched messages to appear there. We tried Splunk Alert: $result.Message$, but only one message shows up, not all of them. How can we do this?

Query:
index=app-index "ERROR"
| eval Message=case(
    like(_raw, "%internal error system%"), "internal error system",
    like(_raw, "%connection timeout error%"), "connection timeout error",
    like(_raw, "%connection error%"), "connection error",
    like(_raw, "%unsuccessfull application%"), "unsuccessfull application",
    like(_raw, "%error details app%"), "error details app",
    1=1, null())
| stats count by Message
| eval error=case(
    Message="internal error system" AND count>0, 1,
    Message="connection timeout error" AND count>0, 1,
    Message="connection error" AND count>0, 1,
    Message="unsuccessfull application" AND count>0, 1,
    Message="error details app" AND count>0, 1)
| search error=1
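A hedged sketch of one common workaround, assuming the goal is to show every matched message in the subject: $result.fieldname$ only reads the first result row, so the matched messages can be collapsed into a single field on a single row. The field name AllMessages below is purely illustrative.

index=app-index "ERROR"
| eval Message=case(
    like(_raw, "%internal error system%"), "internal error system",
    like(_raw, "%connection timeout error%"), "connection timeout error",
    like(_raw, "%connection error%"), "connection error",
    like(_raw, "%unsuccessfull application%"), "unsuccessfull application",
    like(_raw, "%error details app%"), "error details app",
    1=1, null())
| stats count by Message
| where count > 0
| stats values(Message) as AllMessages
| eval AllMessages=mvjoin(AllMessages, ", ")

With the output reduced to one row, a subject of Splunk Alert: $result.AllMessages$ should show every matched message rather than just the first.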
The general answer is yes - you can filter out events. The way to do it for your need will depend on your precise use case. Within Splunk you can do it like this: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad If you can filter in Azure so you simply don't send the data to Splunk at all, even better - but that is out of scope for this forum, and you would have to ask some experienced Azure admins how to do it.
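A minimal sketch of the Splunk-side filtering that the doc page describes, assuming the aim in this thread is to drop everything from certain hosts before indexing. The hostname and stanza name are illustrative only, and the config has to sit on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.

props.conf
[host::noisyhost01]
TRANSFORMS-setnull = setnull

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

The events are still received and parsed, but they are routed to the nullQueue instead of being written to an index, so they are not indexed and do not count against the ingest license.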
Ahh, right you are. Still, there might be an issue with indexed extractions. Anyway, the timestamp doesn't seem to be "evenly" offset, so it looks more like the current timestamp rather than the one from the event. There might also be an issue with the prefix itself; we don't see the raw data, so there could be any number of whitespace characters there. (And escaping the quotes is not needed, but it shouldn't be harmful in this case.)
Basically I am trying to find a way to prevent data from certain hostnames from even getting ingested into Splunk (a cost-cutting measure, for one thing).
We have a requirement to forward different data to multiple Splunk instances. In this case, security data is forwarded to EntServer2, while app data is forwarded to EntServer1. What is best practice regarding Universal Forwarders: set up two Universal Forwarders on the same app server, or set up and configure a single UF to forward to both Ent servers?
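A hedged sketch of the single-UF approach, assuming one forwarder routes each input to a different output group; the server names, ports, and input paths below are illustrative assumptions.

outputs.conf
[tcpout]
defaultGroup = app_servers

[tcpout:app_servers]
server = entserver1.example.com:9997

[tcpout:security_servers]
server = entserver2.example.com:9997

inputs.conf
[WinEventLog://Security]
_TCP_ROUTING = security_servers

[monitor:///var/log/myapp]
_TCP_ROUTING = app_servers

A single UF with per-input _TCP_ROUTING is usually simpler to maintain than two forwarder installations on the same host.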
If you just want to filter on "*172.21.255.8*", why do you have all that extra stuff in the regex? Try this simpler version:
REGEX = ,172\.21\.255\.8:\d+,
Data rolled to the frozen directory is showing up as inflight data, and its size is shown as 0. There are few details about inflight-db and why it happens; basically Splunk says these directories are created when Splunk is writing from warm to cold, but not much more than that.

So let's say Splunk is writing buckets, something like 100 GB worth: you had 3 indexers with buckets that were 3 months old and you forced all the buckets from those 3 indexers to move to cold. As long as they had write access to your storage, should there even be inflight dbs? Or is that too much to write at once, so Splunk writes some of the data, logs an error for the rest, and calls it a day?

So is there a limit to how much can be written to cold at one time? If it is writing and that write gets interrupted, why doesn't it see that and just resume where it left off to complete the transfer? I know there are logs, but it seems to me it should be like watching a movie over the internet: I should be able to pause and then resume when I'm ready. Or better yet, if 100 buckets start writing and some technical issue happens a quarter or halfway through, the bucket write should either cancel completely and tell me in plain language, or pause and resume when the connection is back up.
Default System Timezone is selected in my preferences. I don't think that is the problem because my other searches are working fine.
Planning to deploy a Splunk architecture with one SH and an indexer cluster with 100 GB/day data ingestion. Is there any recommended documentation for OS partitions (paths with sizes), mount points, and RAID configuration for Linux servers?
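No official sizing asserted here, but a hedged sketch of one common layout; the paths and index name are illustrative assumptions, and actual sizes depend on retention and replication, so the reference hardware guidance in Splunk's Capacity Planning Manual is the place to confirm numbers.

# illustrative mount points on each indexer
/opt/splunk        Splunk binaries, configs and $SPLUNK_HOME/var
/splunkdata/hot    hot/warm buckets on the fastest storage (often RAID 10 / SSD)
/splunkdata/cold   cold buckets on larger, slower storage

indexes.conf (per index, pointing at those mounts)
[myindex]
homePath   = /splunkdata/hot/myindex/db
coldPath   = /splunkdata/cold/myindex/colddb
thawedPath = /splunkdata/cold/myindex/thaweddb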
I am new to regex. I want to extract just Catalog-Import from the event below. Can anyone help me with how to do this?

[2024-08-22 12:55:56.439 GMT] ERROR CustomJobThread|1154761894|Catalog-Import|GetNavigationCatalogFromSFTP com.demandware.api.net.SFTPClient Sites-ks_jp_rt-Site JOB faadaf233c 09beff21183cec83f264904132 5766054387038857216 - SFTP connect operation failed
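A hedged sketch, assuming the job name is always the pipe-delimited token that follows CustomJobThread and its numeric ID; the field name job_name is just an illustration.

... | rex "CustomJobThread\|\d+\|(?<job_name>[^|]+)\|"

Against the sample event this captures Catalog-Import into job_name, and it keeps working if other job names appear in the same position.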
Hello everyone, I'm trying to filter out some logs in the IA-WindowsSecurity application. The events to drop are those where:
- EventCode=4634 AND Security_ID="*$"

I created an app deployed on an indexer with the following props and transforms config:

props.conf
[WinEventLog]
TRANSFORMS-remove_computer_logoff = remove_logoff

transforms.conf
[remove_logoff]
REGEX =
DEST_KEY = queue
FORMAT = nullQueue

I made the following regexes for matching the event:
- EventCode=4634
- Security_ID=".*\$$"

I'm not sure how to correctly "put together" these two regexes. I did a lot of testing with different types of regexes (in PCRE format), but I wasn't able to make it work. Can someone please help me? Thanks in advance
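A hedged sketch of how the two conditions can be combined: a transform matches against the raw event text, not against extracted fields, so both conditions have to live in a single REGEX. The exact wording around the account value depends on whether the events are rendered in classic multiline or XML format, so the pattern below is an assumption to verify against a real raw 4634 event before deploying; the transform must also run where parsing happens (indexer or heavy forwarder).

transforms.conf
[remove_logoff]
# (?ms) lets . cross line breaks in a multiline Windows event; the pattern
# assumes "EventCode=4634" appears before a Security ID value that ends in $
REGEX = (?ms)EventCode=4634.*Security ID:\s+[^\r\n]+\$
DEST_KEY = queue
FORMAT = nullQueue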
Or
| stats list(SourceType) as "Source-Type", list(GB) as GB by Index
| appendpipe [| stats sum(GB) as GB | eval Index="Overall Total"]
Try switching the last two lines:
| addtotals col=t row=f labelfield=Index label="Overall Total"
| stats list(SourceType) as "Source-Type", list(GB) as GB by Index
index=_internal source=/opt/splunk/var/log/splunk/license_usage.log type=Usage
| stats sum(b) as bytes by st, idx
| eval GB=round(bytes/(1024*1024*1024),6)
| table st, idx, GB
| sort -GB
| eventstats sum(GB) as total
| eval Percentage=round((GB/total)*100,6)
| rename st as SourceType
| rename idx as Index
| stats list(SourceType) as "Source-Type", list(GB) as GB by Index
| addtotals col=t row=f labelfield=Index label="Overall Total"
What search did you use to get this table?
Please find the sample values below:

Index         Source-Type                   GB
aws_vpcflow   aws:vpcflow                   10
              aws:cloudwatchlogs:vpcflow    20
windows       windows:fluentd               30
              windows                       40
              WinEventLog:Security          50
cloud         cloud_watch                   60
              aws_cloud                     70
The settings should be fine.  MAX_TIMESTAMP_LOOKAHEAD starts after TIME_PREFIX ends.
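For reference, a minimal sketch of how the two settings interact; the sourcetype name, prefix, and time format below are placeholders rather than the ones from this thread.

props.conf
[my_sourcetype]
# TIME_PREFIX is a regex locating the text just before the timestamp;
# MAX_TIMESTAMP_LOOKAHEAD then limits how many characters past that match
# Splunk will scan when reading the timestamp itself.
TIME_PREFIX = ^\[
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %Z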
Please share your actual events (anonymised appropriately) in a codeblock