All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello again. My last entry, "I've parsed my input file (JSON parser) and before one of the missing events there is an error, like an unexpected non-whitespace character. So I think it is not a problem with Splunk!", was a wrong conclusion. I made a mistake in my investigation. So I tried the program jq (Ubuntu Linux) to validate the whole JSON file. Surprise: there is no error in the JSON file. I checked the JSON file in the forwarder directory, so I guess there is a character in the data that Splunk "misunderstands" and that breaks the JSON structure.
What do you mean by "it's not working"? It's supposed to work on contents of a given field. This field must be extracted before you use the rex command. Is it extracted?
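To illustrate the extraction-order point, a minimal sketch (index, sourcetype, path, and field names here are made-up placeholders, not from the thread): the field has to exist in the results first, for example via spath, before rex can run against it:

```
index=my_index sourcetype=my_json
| spath output=message path=message
| rex field=message "error_code:\s*(?<error_code>\d+)"
| where error_code > 0
```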
I am looking to record a measurement which is taken after the transition from the Home state to the Diagnostic state. I am calling the state change the keystone event. The raw keystone event looks like this:

{"bootcount":26,"device_id":"X","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: GCL internal state { new_state: Diagnostic, old_state: Home, conditions: 65600, error_code: 0}", "model_number":"X1","sequence":274,"serial":"123X","software_version":"2.3.1.7682","ticks":26391,"timestamp":1723254756}

My search to find the keystone event looks like:

index="june_analytics_logs_prod" message=* new_state: Diagnostic, old_state: Home* NOT message=*counts*

After the keystone event, I would like to take the measurements found in the next 5 events; I will call these the data events. A raw data event looks like:

{"bootcount":26,"device_id":"x","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: fan: 2697, auger: 1275, glow_v: 782, glow: false, fuel: 0, cavity_temp: 209", "model_number":"X1","sequence":280,"serial":"123X","software_version":"2.3.1.7682","ticks":26902,"timestamp":1723254761}

I would like to take the first 5 data events directly after the keystone event, extract the glow_v value, and take the median of these 5 values as the accepted value.

In short, I want to build a query that finds the time of a keystone event, uses this time to find the immediately following data events that match certain criteria, extracts the glow_v value from those data events, and then takes the median of these glow_v values.
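One possible shape for this (an untested sketch using the index and message formats from the post): sort into time order, number each keystone event, attach the data events that follow it, keep the first five, and take the median of glow_v per keystone:

```
index="june_analytics_logs_prod" (message="*new_state: Diagnostic, old_state: Home*" OR message="*glow_v:*")
| sort 0 _time
| eval is_keystone=if(match(message, "new_state: Diagnostic, old_state: Home"), 1, 0)
| streamstats sum(is_keystone) as keystone_id
| rex field=message "glow_v:\s*(?<glow_v>\d+)"
| where keystone_id > 0 AND isnotnull(glow_v)
| streamstats count as data_seq by keystone_id
| where data_seq <= 5
| stats median(glow_v) as accepted_glow_v by keystone_id
```

The running sum of is_keystone gives each keystone event (and everything after it, until the next keystone) its own keystone_id, so data_seq counts data events within each keystone's window.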
Hi @PickleRick, our requirement is to set up an alert on these logs, and we need to trigger the alert if there are any failures greater than 0. I tried the rex you provided but it's not working. As you suggested, may I know how we can do it via spath?
Sorry, I should have been more clear. I do see the files I am troubleshooting (/var/log/cron and /var/log/audit/audit.log) when I run "splunk list monitor", and they match when I run "splunk list inputstatus". The "inputstatus" command shows:

/var/log/share
  file position = xxxxxx
  size = <same-as-above>
  percent = 100
  type = finished reading

/var/log/audit/audit.log
  file position = xxxxxx
  size = <same-as-above>
  percent = 100
  type = open file
No. It could be complicated to install two UF instances on one host. Especially on Windows. If you're configuring tcpout outputs, you can just set up two output groups and send to both.
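A sketch of the two output groups in outputs.conf on a single UF (server names are placeholders):

```
[tcpout]
defaultGroup = entserver1_group

[tcpout:entserver1_group]
server = entserver1.example.com:9997

[tcpout:entserver2_group]
server = entserver2.example.com:9997
```

Individual inputs can then be routed selectively by adding e.g. `_TCP_ROUTING = entserver2_group` to the relevant stanza in inputs.conf; without it, data goes to the defaultGroup.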
Hi Splunkers, I'm trying to get disk usage for searches run by user:

| rest /services/search/jobs
| rex field=eventSearch "index\s*=(?<index>[^,\s)]+)"
| search index=$ind$
| eval size_MB = diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user

Is there a way to get disk usage for historical searches, like for a month or more?
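`| rest /services/search/jobs` only sees jobs that still exist on disk, i.e. whose TTL has not expired, so it cannot look back a month. One workaround (a sketch; the summary index name is an assumption) is to schedule the REST search and collect snapshots into a summary index:

```
| rest /services/search/jobs
| eval size_MB = diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user
| collect index=summary_search_diskusage
```

Run it on a schedule (e.g. hourly), then report over index=summary_search_diskusage for any time range; since snapshots overlap, max() per user per period is usually safer than sum().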
Thanks for the response. I'll get into it tomorrow. More info: it's all one source in Splunk (1 x syslog spanning 30 days). My search = "ACCESS BLOCK". My results are many rows like:

XXXXXXXXXXX XXXXXXXXXXX XXXXXXXXXXX Local1.Warning 172.30.31.4 Aug 12 23:16:09 2024 CXXXXXXXXXXX0 src="45.148.10.81:18837" dst="XXXXXXXXXXX:443" msg="surfshark.com:Anonymizers, SSI:N" note="ACCESS BLOCK" user="unknown" devID="XXXXXXXXXXX" cat="URL Threat Filter"
host = XXXXXXXXXXX.splunkcloud.com
source = Syslog-CatchAll2024-08-12.txt
sourcetype = 1-Zyxel

XXXXXXXXXXX XXXXXXXXXXX XXXXXXXXXXX Local1.Warning 172.30.31.4 Aug 12 23:16:09 2024 CXXXXXXXXXXX0 src="45.148.10.87:6139" dst="XXXXXXXXXXX:443" msg="surfshark.com:Anonymizers, SSI:N" note="ACCESS BLOCK" user="unknown" devID="XXXXXXXXXXX" cat="URL Threat Filter"
host = XXXXXXXXXXX.splunkcloud.com
source = Syslog-CatchAll2024-08-12.txt
sourcetype = 1-Zyxel

I then want to search again but remove every line that has src="45.148.10.81:18837" OR src="45.148.10.87:6139" OR (the next) OR (the next) OR (and so on for 3000+ IP addresses), thus giving me a data set of "known good traffic".
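With 3000+ addresses, a literal OR chain is impractical; a lookup-based exclusion is the usual pattern. A sketch (the lookup file name and its src_ip column are assumptions, and this matches on the IP only, ignoring the port): put the known-bad sources in a CSV lookup and exclude any event whose extracted source IP appears in it:

```
"ACCESS BLOCK"
| rex field=_raw "src=\"(?<src_ip>[^:\"]+)"
| search NOT [ | inputlookup blocked_sources.csv | fields src_ip ]
```

The subsearch returns the list of src_ip values from the lookup, and the NOT drops every event that matches one of them.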
When including a results field in an alert, only the first result is included.  There is no way to include anything other than the first result.
So I got it to work, but only after giving up completely on logic. When installing the app, the message clearly asks for the username and password only, not an email and password. To make my life easier I gave the same password to everything so I wouldn't have to juggle passwords for a demo. So imagine my surprise when it kept saying wrong username and password. I finally said the heck with it, used my full email as my username, and typed the password again, and it worked and installed the app without any issues. So my only conclusion is that the prompt must be worded wrong: it is in fact asking for your email and password from the webpage sign-in, not the admin username and password, which is what you would assume.
We are using the below query for our alert. When we receive the mail, we want to see Message in the alert title. In the subject we give Splunk Alert: $name$; when the alert is triggered, we want to see the matched Messages in the alert title instead. We tried Splunk Alert: $result.Message$, but only 1 message shows up, not all of them. How can we do it?

Query:

index=app-index "ERROR"
| eval Message=case(
    like(_raw, "%internal error system%"), "internal error system",
    like(_raw, "%connection timeout error%"), "connection timeout error",
    like(_raw, "%connection error%"), "connection error",
    like(_raw, "%unsuccessfull application%"), "unsuccessfull application",
    like(_raw, "%error details app%"), "error details app",
    1=1, null())
| stats count by Message
| eval error=case(
    Message="internal error system" AND count>0, 1,
    Message="connection timeout error" AND count>0, 1,
    Message="connection error" AND count>0, 1,
    Message="unsuccessfull application" AND count>0, 1,
    Message="error details app" AND count>0, 1)
| search error=1
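$result.<field>$ tokens only ever read the first result row. One workaround, sketched from the query above, is to collapse every matching message into a single row before the alert fires, so one token carries them all:

```
index=app-index "ERROR"
| eval Message=case( ... same case() expression as above ... )
| stats count by Message
| where count > 0
| stats values(Message) as Messages
| eval Messages=mvjoin(Messages, ", ")
```

The subject line can then use Splunk Alert: $result.Messages$ to show the full comma-separated list.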
The general answer is yes, you can filter out events. The specific way to do it will depend on your precise use case. Within Splunk you can do it like this: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad If you can filter in Azure so you simply don't send the data to Splunk at all, even better, but that is out of scope for this forum and you would have to ask experienced Azure admins how to do it.
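For the host-based case, the pattern from that doc page sketches out roughly like this (the sourcetype, stanza names, and host pattern are assumptions; this runs on indexers and heavy forwarders, not on universal forwarders):

```
# props.conf
[my_azure_sourcetype]
TRANSFORMS-drop_hosts = drop_noisy_hosts

# transforms.conf
[drop_noisy_hosts]
SOURCE_KEY = MetaData:Host
REGEX = ^host::noisy-host-\d+$
DEST_KEY = queue
FORMAT = nullQueue
```

Events whose host metadata matches the regex are routed to the nullQueue and never indexed (note that host values in MetaData:Host carry a "host::" prefix).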
Ahh, right you are. Still, there might be an issue with indexed extractions. Anyway, the timestamp doesn't seem to be "evenly" offset, so it looks more like the current timestamp, not the one from the event. There might also be an issue with the prefix itself; we don't see the raw data, so there could be any number of whitespace characters there. (And escaping the quotes is not needed, but it shouldn't be harmful in this case.)
Basically I am trying to find a way to prevent data from certain hostnames from even getting ingested into Splunk (as a cost-cutting measure, for one thing).
We have a requirement to forward different data to multiple Splunk instances. In this case, security data is forwarded to EntServer2, while app data is forwarded to EntServer1. What is best practice regarding Universal Forwarders: set up two Universal Forwarders on the same app server, or set up and configure a single UF to forward to both Ent servers?
If you just want to filter on "*172.21.255.8*" then why do you have all that extra stuff in the regex? Try this simpler version REGEX = ,172\.21\.255\.8:\d+,
Data rolled to the frozen directory is showing up as inflight data, and its size shows as 0. There are few details about inflight-db and why it happens; basically Splunk says these directories are created when Splunk is writing a bucket from warm to cold, but not much more than that.

So let's say Splunk is writing buckets, 100 GB worth: if you had 3 indexers with buckets that were 3 months old and you forced all the buckets from these 3 indexers to move to cold, as long as they had write access to your storage, should there even be inflight dbs? Or is that too much to write at once, so Splunk writes some data and for the rest just makes an error log entry and calls it a day?

So is there a limit to how much can be written to cold at one time? If a write gets interrupted, why doesn't Splunk see that and just resume where it left off to complete the transfer? I know there are logs, but it seems to me it should be like streaming a movie: I should be able to pause and then resume when I'm ready. Or better yet, if 100 buckets start writing and some technical issue happens a quarter or halfway through, that bucket write should either cancel completely and tell me in plain language, or pause and resume when the connection is back up.
Default System Timezone is selected in my preferences. I don't think that is the problem, because my other searches are working fine.
We are planning to deploy a Splunk architecture with one SH and an indexer cluster, with 100 GB/day data ingestion. Are there any recommended documents for OS partitions (paths with sizes), mount points, and RAID configuration for Linux servers?
I am new to regex. I want to extract just Catalog-Import from the event below. Can anyone help with how I can do this?

[2024-08-22 12:55:56.439 GMT] ERROR CustomJobThread|1154761894|Catalog-Import|GetNavigationCatalogFromSFTP com.demandware.api.net.SFTPClient Sites-ks_jp_rt-Site JOB faadaf233c 09beff21183cec83f264904132 5766054387038857216 - SFTP connect operation failed
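One possible rex, assuming the value wanted is always the token between the second and third pipe characters (an untested sketch; the capture-group name is arbitrary):

```
... your base search ...
| rex field=_raw "^\[[^\]]+\]\s+\w+\s+[^|]+\|[^|]+\|(?<job_name>[^|]+)\|"
```

It skips the bracketed timestamp and the log level, then steps over the first two pipe-delimited tokens and captures the third, so for the sample event job_name would be Catalog-Import.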