All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is it possible to get it to the last row?
You could do something like this using eventstats:

| makeresults format=json data="[{\"device\":\"foo\", \"user_numbers\":19}, {\"device\":\"foo\", \"user_numbers\":39}, {\"device\":\"bar\", \"user_numbers\":39}, {\"device\":\"foo\", \"user_numbers\":44}]"
| eventstats dc(user_numbers) as overall_distinct_user_count
| stats dc(user_numbers), first(overall_distinct_user_count) as overall_distinct_user_count by device
So, I want the distinct count of user_numbers by device, but in the same chart/table I also want the distinct count of all the user_numbers in a last column called total. Is it possible to get a stats formula with different fields in the by? This is something that I have: | stats dc(user_numbers) by device but I also want to show the total in the same table: | stats dc(user_numbers) Right now I count duplicates and show this: | addcoltotals label="Total Members" labelfield=device I really hope this is possible! Thank you!
AI to assist in creating valid regex expressions would be super helpful.
We are on Splunk Cloud 9.1. Has anyone successfully been able to ingest data from SendGrid into Splunk? It looks like the only option they have is a webhook that requires a URL to send to. I am no Splunk wizard, so I may just be missing the easy answer here, but I can't find a way to generate a URL for SendGrid to send into Splunk Cloud.
What would be a better method than wildcards to find events where the message field contains "new_state: Diagnostic, old_state: Home"? I am looking for the events directly chronologically after the keystone event, that is, with a timestamp more recent than the keystone event. How would I structure this streamstats command in the rest of my query? That is, there are separate criteria which the data events must meet in order to be viable.
Thanks for the reply. Regarding forwarding to two output groups, is there a doc that describes this in detail? In any case, given that we have multiple app log files and script output already going to one of the Enterprise servers, how difficult will it be to segregate which log files go to which Enterprise server?
OK. There are some things that are highly suboptimal in your search (especially the use of wildcards), but I'm not digging into that at the moment. Also be aware that "next" might not mean the same thing to everyone, so you should be precise when specifying your problem. By default Splunk returns data in reverse chronological order, so Splunk's "next" event will actually be a previous event time-wise. Anyway, the way to match something plus some subsequent events (again, in Splunk's order - you might want to reverse or sort your events first) is to use the streamstats command with the count function and the reset_on_change parameter, and then simply filter on events having that count value below a given threshold.
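A minimal sketch of that marker-and-count pattern, using the index, field names, and 5-event window from the question (the marker regex and the glow_v extraction are assumptions, adjust them to your actual data):

```
index="june_analytics_logs_prod"
| reverse
| eval is_keystone=if(match(message, "new_state: Diagnostic, old_state: Home"), 1, 0)
| streamstats sum(is_keystone) as group_id
| streamstats count as pos by group_id
| where group_id > 0 AND pos > 1 AND pos <= 6
| rex field=message "glow_v: (?<glow_v>\d+)"
| stats median(glow_v) as accepted_value by group_id
```

The reverse puts events into chronological order, the first streamstats assigns every event to the most recent keystone, the second numbers events within that group (pos=1 is the keystone itself), and the where keeps only the five data events that follow each keystone.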
Hello again, my last entry "I've parsed my input file (json parser) and before one of the missing events there is an error, like an unexpected non-whitespace sign. So I think it is not a problem of Splunk!" was a wrong result. I made a mistake in my investigation. So I tried the program jq (Ubuntu Linux) to validate the whole JSON file. Surprise - there is no failure in the JSON file. I checked the JSON file in the forwarder directory. So I guess there is a character in the data that Splunk "misunderstands", which breaks the JSON structure.
What do you mean by "it's not working"? It's supposed to work on the contents of a given field. That field must be extracted before you use the rex command. Is it extracted?
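For illustration, a minimal sketch of rex applied to an already-extracted field (the field name and pattern here are placeholders, not taken from your data):

```
| rex field=message "failures=(?<failures>\d+)"
| where failures > 0
```

If message itself has not been extracted, rex has nothing to operate on, no matter how correct the regular expression is.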
I am looking to record a measurement which is taken after the transition from Home state to Diagnostic state. I am calling the state change the keystone event. The raw keystone event looks like so:

{"bootcount":26,"device_id":"X","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: GCL internal state { new_state: Diagnostic, old_state: Home, conditions: 65600, error_code: 0}", "model_number":"X1","sequence":274,"serial":"123X","software_version":"2.3.1.7682","ticks":26391,"timestamp":1723254756}

My search to find the keystone event looks like:

index="june_analytics_logs_prod" message=* new_state: Diagnostic, old_state: Home* NOT message=*counts*

After the keystone event, I would like to take the measurements found in the immediately following 5 events; I will call these the data events. The raw data events look like:

{"bootcount":26,"device_id":"x","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: fan: 2697, auger: 1275, glow_v: 782, glow: false, fuel: 0, cavity_temp: 209", "model_number":"X1","sequence":280,"serial":"123X","software_version":"2.3.1.7682","ticks":26902,"timestamp":1723254761}

I would like to take the first 5 data events directly after the keystone event, extract the glow_v value, and take the median of these 5 values as the accepted value.

In short, I want to build a query to find the time of a keystone event, use this time to find the immediately following data events that match certain criteria, extract the glow_v value from these data events, and then take the median of these glow_v values.
Hi @PickleRick, our requirement is to set up an alert on these logs, and we need to trigger an alert if there are any failures greater than 0. I tried the rex you provided and it's not working. As you suggested, may I know how we can do it via spath?
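Not knowing the exact JSON layout, a hedged sketch of the spath approach (the path failures is an assumption - substitute the actual key name from your events):

```
index=your_index
| spath path=failures output=failures
| where failures > 0
```

spath extracts directly from the JSON in _raw, so unlike rex field=..., it does not depend on the field having been extracted beforehand.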
Sorry, I should have been more clear. I do see the files (/var/log/cron and /var/log/audit/audit.log) I am troubleshooting when I run "splunk list monitor", and then it matches when I run "splunk list inputstatus". The "inputstatus" command shows:

/var/log/share file position=xxxxxx size=<same-as-above> percent=100 type=finished reading the file
/var/log/audit/audit.log file position=xxxxxx size=<same-as-above> percent=100 type=open file
No. It could be complicated to install two UF instances on one host. Especially on Windows. If you're configuring tcpout outputs, you can just set up two output groups and send to both.
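A minimal outputs.conf sketch of the two-output-group approach (group names and server addresses are placeholders):

```
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = enterprise_a, enterprise_b

[tcpout:enterprise_a]
server = splunk-a.example.com:9997

[tcpout:enterprise_b]
server = splunk-b.example.com:9997
```

With both groups listed in defaultGroup, the forwarder clones all data to both destinations.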
Hi Splunkers, I'm trying to get disk usage for searches run by user.

| rest /services/search/jobs
| rex field=eventSearch "index\s*=(?<index>[^,\s)]+)"
| search index=$ind$
| eval size_MB = diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user

Is there a way to get disk usage for historical jobs, like for a month or more?
Thanks for the response. I'll get into it tomorrow. More info: it's all the one source in Splunk (1 x syslog spanning 30 days). My search = "ACCESS BLOCK". My results are many rows like:

XXXXXXXXXXX XXXXXXXXXXX XXXXXXXXXXX Local1.Warning 172.30.31.4 Aug 12 23:16:09 2024 CXXXXXXXXXXX0 src="45.148.10.81:18837" dst="XXXXXXXXXXX:443" msg="surfshark.com:Anonymizers, SSI:N" note="ACCESS BLOCK" user="unknown" devID="XXXXXXXXXXX" cat="URL Threat Filter" host = XXXXXXXXXXX.splunkcloud.com source = Syslog-CatchAll2024-08-12.txt sourcetype = 1-Zyxel

XXXXXXXXXXX XXXXXXXXXXX XXXXXXXXXXX Local1.Warning 172.30.31.4 Aug 12 23:16:09 2024 CXXXXXXXXXXX0 src="45.148.10.87:6139" dst="XXXXXXXXXXX:443" msg="surfshark.com:Anonymizers, SSI:N" note="ACCESS BLOCK" user="unknown" devID="XXXXXXXXXXX" cat="URL Threat Filter" host = XXXXXXXXXXX.splunkcloud.com source = Syslog-CatchAll2024-08-12.txt sourcetype = 1-Zyxel

I then want to search again but remove every line that has src="45.148.10.81:18837" OR src="45.148.10.87:6139" OR (the next) OR (the next) OR (and so on, for 3000+ IP addresses), thus giving me a data set of "known good traffic".
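For 3000+ exclusions, a giant OR chain is impractical; one common pattern is to keep the values in a lookup file. A sketch, assuming a CSV named bad_src.csv with a single column src holding the values to exclude:

```
"ACCESS BLOCK"
| search NOT [ inputlookup bad_src.csv | fields src ]
```

The subsearch expands into src="..." OR src="..." clauses, and the NOT removes every event whose src matches an entry in the lookup.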
When including a results field in an alert, only the first result is included.  There is no way to include anything other than the first result.
So I got it to work, but only after giving up completely on logic. When installing the app, the message is clearly asking for the username and password only, not an email and password. To make my life easier I gave the same password to everything, so I wouldn't have to juggle passwords for a demo. So imagine my surprise when it kept saying wrong username and password. I finally said the heck with it, used my full email as my username, typed the password again, and it worked and installed the app without any issues. So my only conclusion is that this has to be worded wrong, and it is in fact asking for your email and password from the webpage sign-in, not the admin username and password, which is what you would assume.
So I got it to work but it is only after giving up completely on logic.  So when installing the app the message is clearly asking for the username and password only not an email and password.  To make my life easier I gave the same password to everything right so I wouldn't have to juggle passwords for a demo. So imagine my surprise when it kept saying the wrong username and password, so I finally said the heck with it and used my full email as my username and typed the password again, how about it worked and installed the app without any issues.  So my only conclusion is this has to be worded wrong and is in fact asking for your email and password from the webpage sign-in and not the admin username and password which you would assume it is.
We are using the below query for our alert. When we receive mail we want to see MESSAGE in the alert title. In the subject we give Splunk Alert: $name$; when the alert is triggered we want to view those Messages in the alert title. We tried giving Splunk Alert: $result.Message$, but only 1 message shows up, not all. How can we do it?

Query:

index=app-index "ERROR"
| eval Message=case(
    like(_raw, "%internal error system%"), "internal error system",
    like(_raw, "%connection timeout error%"), "connection timeout error",
    like(_raw, "%connection error%"), "connection error",
    like(_raw, "%unsuccessfull application%"), "unsuccessfull application",
    like(_raw, "%error details app%"), "error details app",
    1=1, null())
| stats count by Message
| eval error=case(
    Message="internal error system" AND count>0, 1,
    Message="connection timeout error" AND count>0, 1,
    Message="connection error" AND count>0, 1,
    Message="unsuccessfull application" AND count>0, 1,
    Message="error details app" AND count>0, 1)
| search error=1
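$result.<field>$ tokens only ever read from the first result row, so one common workaround is to collapse all messages into a single field that exists on that first row. A sketch appended to the end of the existing query (the field name all_messages is a placeholder):

```
| eventstats values(Message) as all_messages
| eval all_messages=mvjoin(all_messages, ", ")
```

The subject line can then reference Splunk Alert: $result.all_messages$ to show every triggering message in one title.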
The general answer is yes - you can filter out events. The specific way to do it will depend on your precise use case. Within Splunk you can do it like this: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad If you can filter in Azure so that you simply don't send the data to Splunk at all - even better. But that is out of scope for this forum, and you would have to ask some experienced Azure admins how to do it.
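A minimal sketch of the nullQueue filtering described on that page (the sourcetype stanza name and the regex are placeholders for your own values):

```
# props.conf (on the indexer or heavy forwarder)
[my:azure:sourcetype]
TRANSFORMS-null = drop_unwanted

# transforms.conf
[drop_unwanted]
REGEX = pattern_matching_events_to_drop
DEST_KEY = queue
FORMAT = nullQueue
```

Events whose raw text matches the regex are routed to the null queue and never indexed.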