This question is best asked of your Splunk account team.
Has anyone run into the interesting effect that isnum() thinks "NaN" is a number? So:

isnum("NaN") is true
"NaN" * 2 = "NaN"
but tonumber("NaN") is NULL

Are there any other odd, uh, numbers besides Not a Number? I made up the following silly query as an illustration:

| makeresults
| eval num="blubb;NaN;100;0.5;0,5;-0;NULL;"
| makemv delim=";" allowempty=true num
| mvexpand num
| eval isnum=if(isnum(num),"true","false")
| eval isint=if(isint(num),"true","false")
| eval isnull=if(isnull(num),"true","false")
| eval calcnum=num*2
| eval isnumcalcnum=if(isnum(calcnum),"true","false")
| eval isnullcalcnum=if(isnull(calcnum),"true","false")
| eval numnum=tonumber(num)
| eval isnumnum=if(isnum(numnum),"true","false")
| eval isnullnumnum=if(isnull(numnum),"true","false")
| table num,isnum,isint,isnull,calcnum,isnumcalcnum,isnullcalcnum,numnum,isnumnum,isnullnumnum

which results in:

| num     | isnum | isint | isnull | calcnum | isnumcalcnum | isnullcalcnum | numnum | isnumnum | isnullnumnum |
|---------|-------|-------|--------|---------|--------------|---------------|--------|----------|--------------|
| blubb   | false | false | false  |         | false        | true          |        | false    | true         |
| NaN     | true  | false | false  | NaN     | true         | false         |        | false    | true         |
| 100     | true  | true  | false  | 200     | true         | false         | 100    | true     | false        |
| 0.5     | true  | false | false  | 1       | true         | false         | 0.5    | true     | false        |
| 0,5     | false | false | false  |         | false        | true          |        | false    | true         |
| -0      | true  | true  | false  | -0      | true         | false         | -0     | true     | false        |
| NULL    | false | false | false  |         | false        | true          |        | false    | true         |
| (empty) | false | false | false  |         | false        | true          |        | false    | true         |

(Post moved over from the Splunk Enterprise group.)
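Building on the post's own observation that tonumber("NaN") returns NULL while isnum("NaN") is true, a minimal sketch of a stricter numeric test combines the two checks (the field name num follows the query above):

```
| makeresults
| eval num="NaN"
| eval strictly_numeric=if(isnum(num) AND isnotnull(tonumber(num)), "true", "false")
```

For the value "NaN" this yields strictly_numeric=false, while plain isnum() alone reports true.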
Ok, thank you for the clarity... I think someone should revise those steps then; they're ambiguous.
Close, but not quite. Depending on your environment, some components simply don't have to be replaced in place (for example, HFs running modular inputs; caveat - network traffic and inputs state; also, the cluster manager does not _store_ any state locally - it builds its in-memory database from what it queries from the cluster peers). If you want to do the "raze and replace", you'd rather install a fresh Splunk package and then unpack your archive over it (generally you want the package so that the package manager's database is consistent with what's on disk and you can still upgrade the software properly). $SPLUNK_HOME/var doesn't contain only indexed data - it also contains the fishbucket, the KV store, and possibly the state of modular inputs. You might want to exclude var/log, though.
I made some assumptions about the format of your data because I don't know your raw data, but the mechanism is relatively sound. My search should count precisely what you're saying. Let's take an excerpt from your table and translate it into single events in the format I talked about (if you don't have them as separate ASRT and ATOT_ALDT events, you'd have to split them in your base search). This is already sorted by time:

| displayed_flyt_no | _time            | event     |
|-------------------|------------------|-----------|
| flight10          | 29/02/2024 05:49 | ASRT      |
| flight7           | 29/02/2024 05:51 | ATOT_ALDT |
| flight11          | 29/02/2024 05:57 | ASRT      |
| flight8           | 29/02/2024 06:01 | ATOT_ALDT |
| flight12          | 29/02/2024 06:03 | ASRT      |
| flight9           | 29/02/2024 06:04 | ATOT_ALDT |
| flight10          | 29/02/2024 06:08 | ATOT_ALDT |
| flight11          | 29/02/2024 06:10 | ATOT_ALDT |
| flight12          | 29/02/2024 06:14 | ATOT_ALDT |
| flight13          | 29/02/2024 06:19 | ATOT_ALDT |

So if you now add that streamstatsed count, you'd get this:

| displayed_flyt_no | _time            | event     | times_busy |
|-------------------|------------------|-----------|------------|
| flight10          | 29/02/2024 05:49 | ASRT      | 0          |
| flight7           | 29/02/2024 05:51 | ATOT_ALDT | 1          |
| flight11          | 29/02/2024 05:57 | ASRT      | 1          |
| flight8           | 29/02/2024 06:01 | ATOT_ALDT | 2          |
| flight12          | 29/02/2024 06:03 | ASRT      | 2          |
| flight9           | 29/02/2024 06:04 | ATOT_ALDT | 3          |
| flight10          | 29/02/2024 06:08 | ATOT_ALDT | 4          |
| flight11          | 29/02/2024 06:10 | ATOT_ALDT | 5          |
| flight12          | 29/02/2024 06:14 | ATOT_ALDT | 6          |
| flight13          | 29/02/2024 06:19 | ATOT_ALDT | 7          |

So now you group the flights with stats and get:

| displayed_flyt_no | states         | busy |
|-------------------|----------------|------|
| flight7           | ATOT_ALDT      | 1    |
| flight8           | ATOT_ALDT      | 2    |
| flight9           | ATOT_ALDT      | 3    |
| flight10          | ASRT,ATOT_ALDT | 0,4  |
| flight11          | ASRT,ATOT_ALDT | 1,5  |
| flight12          | ASRT,ATOT_ALDT | 2,6  |
| flight13          | ATOT_ALDT      | 7    |

So we now know that flights 7, 8, 9 and 13 only landed (I assume - they didn't ASRT, so they only occupied the runway because of ATOT_ALDT). And we're not interested in those because they didn't wait in a queue.
So we're filtering them out with our "where" command and we're left with just:

| displayed_flyt_no | states         | busy |
|-------------------|----------------|------|
| flight10          | ASRT,ATOT_ALDT | 0,4  |
| flight11          | ASRT,ATOT_ALDT | 1,5  |
| flight12          | ASRT,ATOT_ALDT | 2,6  |

Now if we calculate max(busy)-min(busy)-1, we'll see that all those flights waited in a queue of length 3. And those are the same values as you have in your table.
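Putting the steps discussed in this thread together, the whole pipeline might be sketched roughly like this. This is a sketch, not a tested search; the field names event and displayed_flyt_no come from the examples above, and mvcount(states)=2 is one way of keeping only flights that reported both states:

```
<your base search>
| sort 0 _time
| streamstats count(eval(if(event="ATOT_ALDT",1,null()))) as times_busy
| stats values(event) as states values(times_busy) as busy by displayed_flyt_no
| where mvcount(states)=2
| eval queue=max(busy)-min(busy)-1
```

The if() returns null() for non-ATOT_ALDT events so that count() only increments on runway-busy events.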
Thank you for the reply.

RE: the "swap" method - yeah, I thought about that and also share your apprehension.

RE: "deploy new component" - yeah, I agree with that method for idxc peers and shc members... But check this out... please LMK what you think.

Per the Splunk docs >>> docs.splunk.com/Documentation/Splunk/9.2.0/Installation/MigrateaSplunkinstance

"Migrate a Splunk Enterprise instance from one physical machine to another"

"When to migrate"
"Your Splunk Enterprise installation is on an operating system that either your organization or Splunk no longer supports, and you want to move it to an operating system that does have support."

"How to migrate"
The steps say >>>
Stop Splunk Enterprise services on the host from which you want to migrate.
Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
Install Splunk Enterprise on the new host.

The way I read this is...
1) Stop Splunk on the old box
2) tar up /opt/splunk on the old box, e.g.
> tar -cjvf $(hostname)_splunk-og.tar.bz2 --exclude=./var/* --exclude=./$(hostname)*bz2 ./
3) move and untar the .bz2 file on the new box in /opt, e.g.
> tar -xjvf <hostname>_splunk-og.tar.bz2 -C /opt/splunk
4) install a clean copy (downloaded from Splunk) of the same version of Splunk on top of the old copy

Apparently whoever documented this believes this is the way to go... What do you think?

RE: my --exclude=./var/* - that is for boxes that don't contain indexed data.
RE: my --exclude=./$(hostname)*bz2 - this is because I am running tar from the /opt/splunk dir.

Thank you
Hi,

In a table, I am looking to fill a field's value from the previous available value in case it is null. In the screenshot below, the dataset is queries pulling out some DB records. For the same query, events are split into multiple events (incremental records). The issue is that the query field is not populated in every event (just the 1st event). I am trying to fill the query value from the 1st event into all subsequent events. I have used streamstats, which is almost working but skipping some use cases:

| streamstats current=f last(query) as previous_query reset_before="("match(query,\"\")")" by temp_field

Maybe we can add logic to assign the value where the previous record is < the current record and the query is empty. Previous records:

| streamstats current=f window=1 last(records) as pre_records reset_before="("match(query,\"\")")" by temp_field
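A minimal alternative sketch: the filldown command carries the last non-null value of a field forward into subsequent events, which matches the "fill from the 1st event" requirement. Note that filldown only fills genuinely null fields, so an empty-string query would first need to be turned into null (field name taken from the post):

```
<your base search>
| eval query=if(query="", null(), query)
| filldown query
```

Whether this fits depends on whether the events for different queries can interleave; if they can, the streamstats-with-by approach from the post is still needed.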
Hi, we have integrated the logs into Splunk via S3 SQS successfully. We asked the CM team to dump the logs to the client S3 bucket, and from there we are pulling them into Splunk. Thank you all for your inputs.
got it thank you.
Hi,

We tried to integrate the BeyondTrust privileged remote support app with Splunk to get the logs from BT PRA, as per the BeyondTrust documentation: https://www.beyondtrust.com/docs/remote-support/how-to/integrations/splunk/configure-splunk.htm

The documentation has some 4-5 steps to configure in data inputs:
1. Input name
2. Client ID & token received from BeyondTrust once they enable the API
3. PRA site ID
4. Index name
5. Source type

We have provided these details, but we are unable to see the logs coming into Splunk. However, when checking index=_internal we are able to see BeyondTrust config logs in Splunk, but not the actual event logs from BeyondTrust PRA. Could you please let me know if anyone has integrated BT PRA, and share any troubleshooting steps/guidance to confirm whether there is an issue on the Splunk side, so that we can ask the BT team to check further from their end? This app is not Splunk-developed, so there is no support from Splunk.

Thank you & much appreciated for your responses.
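Not an official troubleshooting procedure, just a sketch: errors from add-on inputs usually land in splunkd.log, so a search along these lines may show why no events arrive (the *beyondtrust* term is a guess at how the input's component or script happens to be named in your environment):

```
index=_internal source=*splunkd.log* log_level=ERROR *beyondtrust*
| stats count by component
```

If that returns nothing, widening log_level to WARN or searching on the input's script name instead may help narrow down whether the failure is on the Splunk side or the BT API side.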
OK. Something is definitely weird with your setup then. I did a quick test on my home lab.

# cat props.conf
[routeros]
TRANSFORMS-add_source_based_field = add_source_based_field

# cat transforms.conf
[add_source_based_field]
REGEX = udp:(.*)
FORMAT = source_source_port::$1
WRITE_META = true
SOURCE_KEY = MetaData:Source
REPEAT_MATCH = false

As you can see, for events coming from my MikroTik router it calls a transform which adds a field called source_source_port containing the port number extracted from the source field. And it works. So the mechanism is sound and the configuration is pretty much OK. Now the question is why it doesn't work for you. One thing which can _sometimes_ be tricky here (but it's highly unlikely that you have that problem with all your sourcetypes) is that it might not be obvious which config is effective when your sourcetype is recast on ingestion, because props and transforms are applied only for the original sourcetype even if it's changed during processing in the pipeline (I think we already talked about it :-)). But as far as I recognize some of your sourcetypes, at least some of them are not recast (pps_log for sure, for example).
Let's take flight16 in my example. It had an ASRT timestamp of 06:19 and an ATOT_ALDT of 06:32. Flight13 had an ATOT_ALDT of 06:19, flight14 06:29 and flight15 06:31, so that's 3 flights that used the runway between the ASRT and ATOT of flight16.
The Cloud Monitoring Console may help with that.  Go to License usage->Workload then scroll down to "SVC usage per hour by top 10 app" and change the "View by" dropdown to "estimated SVC".
Thanks @PickleRick, however, that search doesn't work for me. Not sure if I've made it clear enough. So for every single departing flight in the table (DepOrArr=D), I need to count the total of other flights whose ATOT_ALDT time was between the ASRT timestamp and ATOT_ALDT timestamp of that flight. So if Flight1234 has an ASRT of 09:00 and an ATOT_ALDT of 09:15, how many other flights in the list had an ATOT_ALDT timestamp between those 2 times? And then so on for the next flight, etc. What makes this more complicated is that a flight can have an ASRT timestamp after another flight but still have an ATOT_ALDT timestamp before it.
Thank you so much. I will go forward with splitting on the colon.  Also want to add that I appreciate when time is taken to explain the 'why' behind commands and why they act the way they do. It definitely helps me learn and retain information. Thanks again. 
| makeresults
| eval _raw="{ \"timeStamp\": \"2024-02-29T10:00:00.673Z\", \"collectionIntervalInMinutes\": \"1\", \"node\": \"plgiasrtfing001\", \"inboundErrorSummary\": [ { \"name\": \"400BadRequestMalformedHeader\", \"value\": 1 }, { \"name\": \"501NotImplementedMethod\", \"value\": 2 }, { \"name\": \"otherErrorResponses\", \"value\": 1 } ] }| { \"timeStamp\": \"2024-02-29T10:00:00.674Z\", \"collectionIntervalInMinutes\": \"1\", \"node\": \"plgiasrtfing001\", \"inboundErrorSummary\": [ { \"name\": \"400BadRequestMalformedHeader\", \"value\": 10 }, { \"name\": \"501NotImplementedMethod\", \"value\": 5 }, { \"name\": \"otherErrorResponses\", \"value\": 6 } ] }"
| makemv _raw delim="|"
| rename _raw as raw
| mvexpand raw
| rex field=raw "timeStamp\"\: \"(?<_time>[^\"]+)"
| rename raw as _raw
```Below is the SPL you need potentially```
| spath inboundErrorSummary{}
| mvexpand inboundErrorSummary{}
| spath input=inboundErrorSummary{}
| chart values(value) over _time by name
Can you test in your dedicated single-server test environment with sample data? That way you can be sure that those confs are correct. After that, just install them into the correct place on your real deployment.
I have had several hundred sourcetypes in my case. Fortunately, we have automation for generating those props.conf files.
You won't find search bundles on your HFs. They're passed from the SH to the indexers.

Run this search to find out how big your lookup files are. Make sure none are growing unexpectedly. Trim the ones you can. Consider adding the ones that don't need to be on the indexers to your deny list.

index=_audit host=sh* isdir=0 size lookups (action=update OR action=created OR action=modified OR action=add) NOT action=search
| stats latest(eval(size/1024/1024)) as size_mb latest(_time) as _time by path
| rex field=path "users\/(?<user>.*?)\/"
| rex field=path "\/apps\/(?<app>[^\/]+)"
| fields _time path size_mb app user
| sort 0 - size_mb
OK. Assuming you have _time and "event" (being either ASRT or ATOT_ALDT):

<your base search>
| sort _time
| streamstats count(eval(if(event="ATOT_ALDT",1,null()))) as times_busy

This will give you an additional column called "times_busy" saying how many times, up to this point in time, you had an event of the runway being busy due to ATOT_ALDT. So now you can do (assuming you're interested only in that parameter, so we can just use stats and forget about everything else):

| stats values(event) as states values(times_busy) as busy by displayed_flyt_no

Now, assuming you're interested only in those for which you had both states reported:

| where states="ASRT" AND states="ATOT_ALDT"

And you can calculate your queue with:

| eval queue=max(busy)-min(busy)-1