All Posts

You should look at using streamstats - here's an example that creates 10 events where every 4th event changes from warning to critical.

| makeresults count=10
| streamstats c
| eval _time=now() - c
| eval type=if(c % 4 = 0, "critical", "warning")
| fields - c
| sort - _time
| streamstats count reset_after="("type=\"warning\"")" by type
| where count=1 AND type="critical"

This will give 2 results when the type changes to critical from warning. To give you an exact solution, I would need to know more about your requirement.
Are the time ranges for both searches the same? If the search uses "now" as the latest time, then naturally they could come up with different results depending on when each search is dispatched and how long it takes to run. I am guessing these are some kind of requests, so MA->COSMOS->PHB - is a negative figure not possible? Presumably there can be requests from COSMOS->PHB at the start of the search window that do not have corresponding requests inside the range from MA->COSMOS - without knowing your environment it's impossible to know.
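As a hedged illustration (the index and field names here are invented placeholders), snapping both earliest and latest to the hour removes the dispatch-time drift, so two searches run seconds apart evaluate an identical window:

index=my_requests earliest=-24h@h latest=@h
| stats count by request_stage

With @h snapping on both bounds, the window no longer moves with "now", which is usually the first thing to rule out when two searches disagree.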
Are you saying you want to remove the milliseconds and timezone specifier, or are you saying that your epoch time does not convert correctly? The time in your message, 1714363262.904000, is not actually the time 2024-04-29T12:01:15.710Z. When you use strptime to parse that time, you will get a time rendered in your local time zone. If you are in GMT then it is the same, but here in Australia, I get a time that represents 2024-04-29 22:01:15.710 AEST, i.e. 10 hours later than the Zulu time. If you are just looking to remove the milliseconds and time zone indicator, then just reformat using

| eval latest_time=strftime(strptime(latest_time, "%FT%T.%Q%Z"), "%F %T")

Note that %F is shorthand for %Y-%m-%d and %T is a shortcut for %H:%M:%S, and that the new time will be in your local time. If you don't care about time zones at all and simply want to remove the T, milliseconds and Z, then you could just use sed, i.e.

| rex mode=sed field=latest_time "s/\.\d+Z// s/T/ /"
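Here is a self-contained sketch you can paste into the search bar to see the conversion on the sample value from the question (makeresults just fabricates one event; the format string is the same one used above):

| makeresults
| eval latest_time="2024-04-29T12:01:15.710Z"
| eval cleaned=strftime(strptime(latest_time, "%FT%T.%Q%Z"), "%F %T")
| table latest_time cleaned

On a GMT search head, cleaned comes out as 2024-04-29 12:01:15; in any other time zone it will be shifted by the local offset, as described above.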
Need a little more information about the real data and its format, but if you want to ignore the first 4 lines, which are terminated by a linefeed, then get the rest of the data, see this example

| makeresults
| fields - _time
| eval _raw="line 1
line 2
line 3
line 4
line 5
line 6"
| rex "(?ms)([^\n]*\n){4}(?<copyofraw>.*)"
Here's an example you can run in the search window - you are interested in the last two lines: the rex statement and the final eval statement.

| makeresults
| fields - _time
| eval source=split("/test1/folder1/scripts/monitor/log/env/dev/Error.log,/test1/folder1/scripts/monitor/log/env/test/Error.log", ",")
| mvexpand source
| rex field=source ".*\/(?<env>\w+)\/.*"
| eval environment=case(env="dev","development",env="test","loadtest",true(), "unknown:".env)

There are several ways you can assign the name to the environment - if you have lots of environments you can do this from a lookup (see the sketch below) or just use the case statement.
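For the lookup route, a minimal sketch, assuming you have uploaded a lookup file called env_names.csv with columns env and environment (both the file name and columns are made up for illustration):

| makeresults
| eval env="dev"
| lookup env_names.csv env OUTPUT environment

The case() version is fine for two or three environments; the lookup keeps the mapping editable without touching the search.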
_raw=
line 1
line 2
line 3
line 4
line 5
line 6

How do I define another new field "copyofraw" to contain just line 5 and line 6?
Hi, how do I extract the word "dev" from the below file location

source=/test1/folder1/scripts/monitor/log/env/dev/Error.log

and add some if condition statements in a Splunk query, like: if word=dev, change it to development; if word=test, change it to loadtest.

Thanks
This is not how rex works - you need to provide a pattern as a regular expression to identify what you want to extract. For example, do you want everything from "change" to "}}"? Does this pattern hold true for all your events where you want to extract this field? Aside from that, this looks like JSON - why aren't you using spath or the other JSON functions to extract the field?
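For example, a minimal spath sketch against a fabricated event shaped like the one in the question (the "before"/"after" values are invented):

| makeresults
| eval _raw="{\"changes\":{\"description\":{\"before\":\"old text\",\"after\":\"new text\"}}}"
| spath path=changes.description.before output=before
| spath path=changes.description.after output=after
| table before after

spath navigates the JSON structure directly, so you avoid hand-escaping braces in a regular expression.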
Try it this way

| eval successtime=if(status=200,_time,null())
| streamstats range(successtime) as successrange count(successtime) as successcount window=3 by status reset_on_change=t
| where successcount=3 and successrange > 10
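If you want to try it without real data, here is a self-contained sketch that fabricates eight events first (the 6-second spacing and the single 500 are made up to exercise the logic):

| makeresults count=8
| streamstats count as n
| eval _time=now() - (8 - n) * 6
| eval status=if(n=3, 500, 200)
| eval successtime=if(status=200,_time,null())
| streamstats range(successtime) as successrange count(successtime) as successcount window=3 by status reset_on_change=t
| where successcount=3 and successrange > 10

This should return the last three events, since after the 500 resets the window, the three consecutive 200s span 12 seconds, which is more than 10.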
Thank you for your time and response. I now don't see double quotes in the search query - this is helpful.

startswith="my start msg" endswith="my end msg" --> works
startswith IN ("my start msg1", "my start msg2", "my start msg3") endswith="my end msg" --> this honors only the endswith flag and does not return events starting with "my start msg1", "my start msg2" or "my start msg3"

I notice that the Splunk search returns events before these matching startswith fields - I will open a different question for that.
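For anyone hitting the same limitation, one untested workaround is transaction's eval form of startswith, which lets a single regular expression stand in for the list of start messages (the message strings below are copied from the example above; this is a sketch, not a verified fix):

| transaction startswith=eval(match(_raw, "my start msg1|my start msg2|my start msg3")) endswith="my end msg"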
I am new to administering Splunk Enterprise Server. I'm guessing the answer is obvious to some, but I'm getting confused trying to figure out a solution from the documentation. We are using Splunk Enterprise Server v9.2.1 stand-alone on an isolated network. We primarily collect and report on multiple systems' audit logging. The server is set up, and I can see ingested logs arriving and create reports on the data. But I need one more thing: I must archive all the original data exactly as it is received on the TCP receiver and copy it to offline storage for safekeeping. I need to be able to re-ingest the raw data at some future date, but that seems pretty straightforward. How can I do this? Is there some way I can grab the data being received on my TCP port listener in raw form, or some magic I need to do with an indexer or forwarder->receiver setup? I'm sure I'm not the first person to need this... How do others accomplish this? Thank you!
Can I change the default message in the Alert Trigger "Send Email"? I have been looking around and can't find anything where I could change this. My goal is to create a template message so we can streamline our alert messages. Any help would be great!
Unfortunately this doesn't help in this scenario, as the issue is Data Model Wrangler seeing the shared knowledge objects of other apps, not the visibility of Data Model Wrangler's own shared knowledge objects.
In my raw data I have a portion that I would like to use in a report:

"changes":{"description":{"before":"<some text or empty>","after":"<some text or empty>"}}

I created

rex summary= "changes":\{"description":\{"before":"<some text or empty>","after":"<some text or empty>"\}\})

But it doesn't work. Please advise.
The strptime function converts a timestamp from text format into integer (epoch) format. To convert from one text format into another, use a combination of strptime and strftime (which converts epochs into text).

| eval latest_time = strftime(strptime(latest_time, "%Y-%m-%dT%H:%M:%S.%3N%Z"), "%Y-%m-%d %H:%M:%S.%3N%Z")

Or you could use SED to replace the "T" with a space.

| rex mode=sed field=latest_time "s/(\d)T(\d)/\1 \2/"
Yes, I suspected that would happen, maybe try:

1. Stop Splunk if you can.
2. Backup the /opt/splunk/etc/apps folder (so you have your app configs at least).
3. For your data, if you are using the default $SPLUNK_HOME/var/lib/splunk folder - this can be moved to a temp folder as well, but if you had a separate volume, even better - it won't get touched.
4. Re-install Splunk over the current broken install and see if that works (I suspect not, but worth a go).
5. If it works, restore the /opt/splunk/etc/apps folder and your data (make sure you set the Splunk permissions - chown -R splunk:splunk - on the Splunk and data folders).

If that all fails, then maybe wipe it clean and start again. If you keep it as it is, it's not going to bode well for the future, as you will have other upgrades to do and it will always cause some kind of problem, so better to sort it all out now and make it clean. If it was me, I would start clean again - fewer issues in the longer run.
Warm and cold buckets can be copied safely while Splunk is running.
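If you want to confirm which buckets are warm or cold before copying, here is a quick sketch (the index name is a placeholder):

| dbinspect index=my_index
| search state=warm OR state=cold
| table bucketId state path

Only hot buckets are actively written to, which is why they are the ones not to copy while Splunk is running.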
Find your knowledge object and its owner - look at the below example and change it to suit your requirements.

Example:

curl -k -u admin_user:password https://<MY_CLOUD_STACK>splunkcloud.com:8089/servicesNS/nobody/YOUR_APP/saved/searches/my_search/acl -d 'owner=new_user' -d 'sharing=global' -X POST

Here's some further help on ACLs in cloud: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/RESTTUT/RESTbasicexamples
In my index I don't see all the logs being forwarded by the Splunk UF. How can I monitor when events are dropped from the event queue on the Splunk UF? Can I monitor this from the Splunk Deployment Server?
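One possible starting point (a sketch, not a verified answer - the host name is a placeholder) is to search the forwarder's own metrics in the _internal index, which the UF sends alongside your data:

index=_internal host=my_uf_host source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
| timechart max(pct_full) by name

A queue that sits at or near 100% full is the usual sign that events are being blocked or dropped upstream.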
We are in the midst of a migration from physical servers to virtual servers, and we wonder if stopping Splunk is mandatory in order to perform the cold data migration, or if there's a workaround and this can be safely done without stopping Splunk.