
All Posts

You said that you have CLONE_SOURCETYPE in use. Did you apply these transforms to the original or the cloned sourcetype?
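For reference, a minimal transforms.conf sketch of what a cloning stanza usually looks like (the stanza and sourcetype names here are hypothetical, not taken from your config):

[clone_to_secondary]
REGEX = .
CLONE_SOURCETYPE = my_cloned_sourcetype

# IIRC the cloned stream is processed under the new sourcetype name, so index-time
# transforms have to be mapped to that sourcetype in props.conf as well - worth
# double-checking against transforms.conf.spec for your version.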
Ahhh ok, so the reason it wasn't working is that ASRT and ATOT_ALDT were part of the same event; the example was effectively the events tabled. Now that I have split the two timestamps into two separate events, your code works (couple of typos in the streamstats line, but sorted those). This is what I did. Thanks for all your help!

| eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S"), event="ASRT_".asrt_epoch.","."ATOT_ALDT_".runway_epoch, event=if(isnull(event),"ATOT_ALDT_".runway_epoch,event)
| makemv event delim=","
| mvexpand event
| rex field=event "^(?P<event>(ATOT_ALDT|ASRT))_(?P<_time>.+)$"
| sort _time
| streamstats count(eval(if(event="ATOT_ALDT",1,null()))) as times_busy
| stats values(event) as states values(times_busy) as busy values(ATOT_ALDT) as ATOT_ALDT by displayed_flyt_no
| sort ATOT_ALDT
| where states="ASRT" AND states="ATOT_ALDT"
| eval queue=(max(busy)-1)-(min(busy))
Once you find the big lookup files, use admin commands or the UI to delete them, or use the Lookup File Editor app to modify them. You can also upload an app containing a distsearch.conf file to put a file on the deny list.
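A minimal distsearch.conf sketch of that deny-list approach (the app and lookup names are made up for illustration; on pre-8.1 versions the stanza is [replicationBlacklist]):

[replicationDenylist]
# Keep the oversized lookup out of the knowledge bundle pushed to search peers.
# Paths are matched relative to $SPLUNK_HOME/etc.
huge_lookup = apps[/\\]myapp[/\\]lookups[/\\]big_lookup\.csv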
Check out the fillnull and filldown commands.
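For example, a minimal sketch assuming the field from the question is called query and that "missing" means an empty string rather than a true null:

| eval query=if(query="", null(), query)
| filldown query

Note that filldown has no by clause; for per-group filling you would fall back to streamstats.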
Thanks for the search; is there an SPL command to trim or archive it?
+1 on that. Generally, small licenses are enforcement licenses. The fact that you have a relatively small no-enforcement license is strange enough on its own.
This question is best asked of your Splunk account team.
Has anyone run into the interesting effect that isnum() thinks that "NaN" is a number? So isnum("NaN") is true, "NaN" * 2 = "NaN", but tonumber("NaN") is NULL. Are there any other odd, uh, numbers besides Not a Number? I made up the following silly query as an illustration:

| makeresults
| eval num="blubb;NaN;100;0.5;0,5;-0;NULL;"
| makemv delim=";" allowempty=true num
| mvexpand num
| eval isnum=if(isnum(num),"true","false")
| eval isint=if(isint(num),"true","false")
| eval isnull=if(isnull(num),"true","false")
| eval calcnum=num*2
| eval isnumcalcnum=if(isnum(calcnum),"true","false")
| eval isnullcalcnum=if(isnull(calcnum),"true","false")
| eval numnum=tonumber(num)
| eval isnumnum=if(isnum(numnum),"true","false")
| eval isnullnumnum=if(isnull(numnum),"true","false")
| table num,isnum,isint,isnull,calcnum,isnumcalcnum,isnullcalcnum,numnum,isnumnum,isnullnumnum

which results in:

num      isnum  isint  isnull  calcnum  isnumcalcnum  isnullcalcnum  numnum  isnumnum  isnullnumnum
blubb    false  false  false            false         true                   false     true
NaN      true   false  false   NaN      true          false                  false     true
100      true   true   false   200      true          false          100     true      false
0.5      true   false  false   1        true          false          0.5     true      false
0,5      false  false  false            false         true                   false     true
-0       true   true   false   -0       true          false          -0      true      false
NULL     false  false  false            false         true                   false     true
(empty)  false  false  false            false         true                   false     true

(Post moved over from the Splunk Enterprise group.)
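One practical takeaway from the table above (a sketch, not an official recommendation): since tonumber("NaN") returns NULL, validating with tonumber() instead of isnum() treats NaN as non-numeric:

| makeresults
| eval num="NaN"
| eval is_usable_number=if(isnull(tonumber(num)),"false","true")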
Ok, thank you for the clarity... I think someone should revise those steps then; they're ambiguous.
Close, but. Depending on your environment, some components simply don't have to be replaced in place (for example, HFs running modular inputs; caveat - network traffic and inputs state; also, the cluster manager does not _store_ any state locally - it builds up its in-memory database from what it queries from the cluster peers). If you want to do the "raze and replace", you'd rather install a fresh Splunk package and then unpack your archive (generally you want the package so that the package manager's database is consistent with what's on the disk and you can still upgrade the software properly). $SPLUNK_HOME/var doesn't contain only indexed data. It contains the fishbucket, kvstore, possibly the state of modular inputs... You might want to exclude var/log though.
I made some assumptions about the format of your data because I don't know your raw data, but the mechanism is relatively sound. My search should count precisely what you're saying. Let's take an excerpt from your table and translate it into single events in the format I talked about (if you don't have them as separate ASRT and ATOT_ALDT events, you'd have to split them in your base search). This is already sorted by time:

displayed_flyt_no  _time             event
flight10           29/02/2024 05:49  ASRT
flight7            29/02/2024 05:51  ATOT_ALDT
flight11           29/02/2024 05:57  ASRT
flight8            29/02/2024 06:01  ATOT_ALDT
flight12           29/02/2024 06:03  ASRT
flight9            29/02/2024 06:04  ATOT_ALDT
flight10           29/02/2024 06:08  ATOT_ALDT
flight11           29/02/2024 06:10  ATOT_ALDT
flight12           29/02/2024 06:14  ATOT_ALDT
flight13           29/02/2024 06:19  ATOT_ALDT

So if you now add that streamstatsed count, you'd get this:

displayed_flyt_no  _time             event      times_busy
flight10           29/02/2024 05:49  ASRT       0
flight7            29/02/2024 05:51  ATOT_ALDT  1
flight11           29/02/2024 05:57  ASRT       1
flight8            29/02/2024 06:01  ATOT_ALDT  2
flight12           29/02/2024 06:03  ASRT       2
flight9            29/02/2024 06:04  ATOT_ALDT  3
flight10           29/02/2024 06:08  ATOT_ALDT  4
flight11           29/02/2024 06:10  ATOT_ALDT  5
flight12           29/02/2024 06:14  ATOT_ALDT  6
flight13           29/02/2024 06:19  ATOT_ALDT  7

So now you group the flights with the stats and get:

displayed_flyt_no  states          busy
flight7            ATOT_ALDT       1
flight8            ATOT_ALDT       2
flight9            ATOT_ALDT       3
flight10           ASRT,ATOT_ALDT  0,4
flight11           ASRT,ATOT_ALDT  1,5
flight12           ASRT,ATOT_ALDT  2,6
flight13           ATOT_ALDT       7

So we now know that flights 7, 8, 9 and 13 only landed (I assume - they didn't ASRT, so they only occupied the runway because of ATOT_ALDT). And we're not interested in those because they didn't wait in queue. So we're filtering them out with our "where" command and we're left with just:

displayed_flyt_no  states          busy
flight10           ASRT,ATOT_ALDT  0,4
flight11           ASRT,ATOT_ALDT  1,5
flight12           ASRT,ATOT_ALDT  2,6

Now if we calculate max(busy)-min(busy)-1, we'll see that all those flights waited in a queue of length 3. And it's the same values as you have in your table.
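For reference, the whole pipeline as a sketch under those assumptions (one event per ASRT/ATOT_ALDT occurrence, with the displayed_flyt_no and event fields already extracted):

| sort 0 _time
| streamstats count(eval(if(event="ATOT_ALDT",1,null()))) as times_busy
| stats values(event) as states values(times_busy) as busy by displayed_flyt_no
| where states="ASRT" AND states="ATOT_ALDT"
| eval queue=max(busy)-min(busy)-1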
Thank you for the reply.
RE: the "swap" method - yeah, I thought about that and I also share your apprehension.
RE: "deploy new component" - yeah, I agree with that method for idxc peers and shc members... But check this out... please LMK what you think.
Per Splunk docs >>> docs.splunk.com/Documentation/Splunk/9.2.0/Installation/MigrateaSplunkinstance
"Migrate a Splunk Enterprise instance from one physical machine to another"
"When to migrate"
"Your Splunk Enterprise installation is on an operating system that either your organization or Splunk no longer supports, and you want to move it to an operating system that does have support."
"How to migrate"
The steps say >>>
Stop Splunk Enterprise services on the host from which you want to migrate.
Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
Install Splunk Enterprise on the new host.
The way I read this is...
1) Stop Splunk on the old box
2) Tar up /opt/splunk on the old box, e.g. > tar -cjvf $(hostname)_splunk-og.tar.bz2 --exclude=./var/* --exclude=./$(hostname)*bz2 ./
3) Move and untar the .bz2 file on the new box in /opt, e.g. > tar -xjvf <hostname>_splunk-og.tar.bz2 -C /opt/splunk
4) Install a clean copy (downloaded from Splunk) of the same version of Splunk on top of the old copies
Apparently someone that documented this believes this is the way to go... What do you think?
RE: my --exclude=./var/* - that is for boxes that don't contain indexed data
RE: my --exclude=./$(hostname)*bz2 - this is because I am running the tar from the /opt/splunk dir
Thank you
Hi,
In a table, I am looking to fill a field value from the previous available value in case it is null. In the screenshot below, the dataset is basically queries pulling out some DB records. For the same query, events are split into multiple events (incremental records). The issue is that the query field is not populated in every event (just the 1st event). I am trying to fill the query value from the 1st event into all subsequent events. I have used streamstats, which is almost working but skips some use cases.
| streamstats current=f last(query) as previous_query reset_before="("match(query,\"\")")" by temp_field
Maybe we can add logic to assign the value where the previous record count is less than the current record count and query is empty. Previous records:
| streamstats current=f window=1 last(records) as pre_records reset_before="("match(query,\"\")")" by temp_field
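A hedged variant that may cover the skipped cases (assuming an empty query is an empty string rather than a true null, and that temp_field identifies the group):

| eval query=if(query="", null(), query)
| streamstats last(query) as query by temp_field

Since aggregation functions ignore nulls, last(query) carries the most recent non-empty value forward within each group.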
Hi, we have successfully integrated the logs into Splunk via S3 SQS. We asked the CM team to dump the logs to the client S3 bucket; from there, we are pulling them into Splunk. Thank you all for your inputs.
Got it, thank you.
Hi,
We tried to integrate the BeyondTrust Privileged Remote Support app with Splunk to get the logs from BT PRA, as per the BeyondTrust documentation: https://www.beyondtrust.com/docs/remote-support/how-to/integrations/splunk/configure-splunk.htm
The documentation has some 4-5 steps to configure in data inputs:
1. Input name
2. Client ID & token received from BeyondTrust once they enable the API
3. PRA site ID
4. Index name
5. Source type
We have provided these details, but we are unable to see the logs coming into Splunk. However, when checking index=_internal, we are able to see BeyondTrust config logs in Splunk, but not the actual event logs from BeyondTrust PRA. Could you please let me know if anyone has integrated BT PRA, and whether there are any troubleshooting steps/guidance to confirm whether there is an issue on the Splunk side, so that I can ask the BT team to check further from their end? This app is not developed by Splunk, so there is no support from Splunk.
Thank you & much appreciated for your responses.
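One generic first step (a sketch - the search terms are assumptions, since I don't know the app's exact component names) is to look for modular input errors in splunkd.log:

index=_internal source=*splunkd.log* (ERROR OR WARN) *beyondtrust*

Authentication or HTTP errors from the input's API calls usually surface there, which would help tell whether the problem is on the Splunk side or the BT side.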
OK. Something is definitely weird with your setup then. I did a quick test on my home lab.

# cat props.conf
[routeros]
TRANSFORMS-add_source_based_field = add_source_based_field

# cat transforms.conf
[add_source_based_field]
REGEX = udp:(.*)
FORMAT = source_source_port::$1
WRITE_META = true
SOURCE_KEY = MetaData:Source
REPEAT_MATCH = false

As you can see, for events coming from my mikrotik router it calls a transform which adds a field called source_source_port containing the port number extracted from the source field. And it works. So the mechanism is sound and the configuration is pretty OK. Now the question is why it doesn't work for you. One thing which can _sometimes_ be tricky here (but it's highly unlikely that you have that problem with all your sourcetypes) is that it might not be that obvious which config is effective when your sourcetype is recast on ingestion, because props and transforms are applied only for the original sourcetype even if it's changed during processing in the pipeline (I think we already talked about it :-)). But as far as I recognize some of your sourcetypes, at least some of them are not recast (pps_log for sure, for example).
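If in doubt about which settings actually win, btool shows the effective merged config and the file each line comes from, e.g. (using the sourcetype and transform names from my test above):

$SPLUNK_HOME/bin/splunk btool props list routeros --debug
$SPLUNK_HOME/bin/splunk btool transforms list add_source_based_field --debug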
Let's take flight16 in my example. It had an ASRT timestamp of 06:19 and an ATOT_ALDT of 06:32. Flight15 had an ATOT_ALDT of 06:19, flight14 of 06:29 and flight15 of 06:31, so that's 3 flights that used the runway between the ASRT and ATOT of flight16.
The Cloud Monitoring Console may help with that.  Go to License usage->Workload then scroll down to "SVC usage per hour by top 10 app" and change the "View by" dropdown to "estimated SVC".
Thanks @PickleRick, however, that search doesn't work for me. Not sure if I've made it clear enough. So for every single departing flight in the table (DepOrArr=D), I need to count the total of other flights whose ATOT_ALDT time was between the ASRT timestamp and the ATOT_ALDT timestamp of that flight. So if Flight1234 has an ASRT of 09:00 and an ATOT_ALDT of 09:15, how many other flights in the list had an ATOT_ALDT timestamp between those 2 times? And then so on for the next flight... etc etc. What makes this more complicated is that a flight can have an ASRT timestamp after another flight but still have an ATOT_ALDT timestamp before it.