Hello, I'm trying to search for my detectors based on the tags I gave them. I'm using Terraform to create the charts and detectors. I gave the detectors tags, but when I search for them in the SignalFx UI nothing comes back. I see that sf_tags is something I can filter on, but none of my tags work there, and I can't see any tag information on the detector anywhere. Any guidance on how to get a list of my detectors based on a tag would be helpful.
We want to provide a few capabilities to the team. Presently the team has the capability to create email alerts. What capabilities need to be granted so they can create auto-cut alerts?
How do I get Slurm log content into Splunk?
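A minimal sketch of one way in, assuming a Universal Forwarder runs on the Slurm node and Slurm writes to /var/log/slurm (the path, index, and sourcetype names here are assumptions, adjust to your site), in inputs.conf:

    # Monitor Slurm controller/daemon logs; create the target index first
    [monitor:///var/log/slurm/*.log]
    index = slurm
    sourcetype = slurm
    disabled = 0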
Hello, I am getting this warning, and the sample data looks like this:

    02-22-2012 17:01:12.280 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:01:12 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
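If those old timestamps are genuinely correct, a hedged example of the props.conf change the warning suggests (the sourcetype name atlassian_jira is taken from the Context string in the warning; the values are illustrative, set them to what your data actually needs), applied on the parsing tier (indexer or heavy forwarder) and followed by a restart:

    [atlassian_jira]
    # Accept timestamps up to ~10 years in the past and 2 days in the future
    MAX_DAYS_AGO = 3650
    MAX_DAYS_HENCE = 2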
I am working on a query that lists hosts and their corresponding instances. My results look like the example below. I want to remove the 111222 host from my results only when the instance is R:, and I am not certain how to do this within my query.

    Host    Instance
    111222  A:
    111222  C:
    111222  R:
    333444  A:
    333444  C:
    333444  R:
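A possible sketch, assuming the fields are literally named Host and Instance as in the example (adjust to your extraction): filter out just that one combination and keep everything else.

    <your existing search>
    | where NOT (Host="111222" AND Instance="R:")

The same condition also works directly in the initial search as NOT (Host=111222 Instance="R:").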
I created some reports to populate a dashboard. When only two or three reports populate the dashboard everything works fine, but when there are many, the panels never load. I reference the reports like this:

    {
        "type": "ds.savedSearch",
        "options": {
            "ref": "test_consumo_licenca_index_ultimos_30_dias_top_5"
        }
    }

Again, it loads when there are few reports, but with many it stays loading forever, even though the underlying report has already been executed and only needs to display its results.
Using splunkforwarder-9.0.2-17e00c557dc1.x86_64 on the forwarder (Linux) box and splunk-9.0.4-de405f4a7979.x86_64 on the indexer node. From the forwarder node I can telnet to the indexer node fine, but in splunkd.log on the forwarder I see the errors below about s2s negotiation failing:

    ERROR AutoLoadBalancedConnectionStrategy [25021 TcpOutEloop] - s2s negotiation failed. response='NULL'
    ERROR TcpOutputFd [25021 TcpOutEloop] - s2s negotiation failed. response='NULL'
We have both Cisco ASA and FTD firewalls. The ASA data is parsing fine and the appropriate fields are extracted; the FTD logs don't get the same treatment. I only had the Splunk Add-on for Cisco ASA, so I downloaded the Cisco Firepower Threat Defense (FTD) sourcetype app and installed it on the search heads, but that didn't change anything.

    Mar 1 18:44:20 USxx-xx-FW01 : %ASA-6-302014: Teardown TCP connection 3111698504 for wan:208.87.237.180/8082 to cm-data:12.11.60.44/60113 duration 0:00:00 bytes 327 TCP FINs from wan

    Mar 1 13:45:09 MXxx-EG-FTD01 : %FTD-6-302014: Teardown TCP connection 125127915 for CTL_Internet:194.26.135.230/41903 to CTL_Internet:123.243.123.218/33445 duration 0:00:30 bytes 0 Failover primary closed

As you can see, the messages are nearly identical for both firewalls, but the ASA events have lots of interesting fields while the FTD events only have a few.
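One technique that sometimes helps here (a sketch, not the app's documented configuration; the stanza names are hypothetical): if the FTD syslog is arriving under the ASA sourcetype, you can force events containing %FTD- into a cisco:ftd sourcetype at index time with a transforms-based override on the parsing tier, so that whichever app carries FTD search-time extractions can match on it.

    # props.conf
    [cisco:asa]
    TRANSFORMS-force_ftd_sourcetype = force_ftd_sourcetype

    # transforms.conf
    [force_ftd_sourcetype]
    REGEX = %FTD-\d
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::cisco:ftd

Note this only affects newly indexed data, and the search-time extractions for the new sourcetype still have to be installed on the search heads.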
In some cases I have Splunk installed on server builds where /opt/splunk is a mount point on a separate disk/LVM; that is, the /opt/splunk directory is on a different disk than the OS. Would anyone know if there are any problems with detaching the LVM from a RHEL 7 box and then attaching it to a RHEL 9 box? Thank you.
I am having trouble clearing a STIG that requires the file permissions, ownership, and group membership of system files and commands to match the vendor values. It is hitting on pretty much all of the Splunk files, but I am not sure what it means for them to match the vendor values. Any help is much appreciated!
What are some reasons why a UF wouldn't monitor a Windows file, assuming there is nothing wrong with any configs and the virtual account has full access to the file I'm trying to monitor?
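One way to see what the UF itself thinks of the file (a sketch; it assumes the forwarder's _internal logs reach your indexers, and the host and path here are placeholders to substitute): the TailReader / WatchedFile messages usually state why a file is skipped, e.g. it looks too old (ignoreOlderThan), matches a blacklist, or shares a CRC with an already-seen file (crcSalt).

    index=_internal host=<your_uf> sourcetype=splunkd
        component IN ("TailReader", "WatchedFile", "TailingProcessor")
        "<part of the monitored file path>"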
I have two SPL searches:

    #1
    index=index1 service IN (22, 53, 80, 8080) | table src_ip

    #2
    index=index2 dev_ip IN (values from #1 src_ip) | table dev_ip, OS_Type

I am trying to combine them into a single SPL search with a subsearch, i.e.:

    index=index2 dev_ip IN ([search index=index1 service IN (22, 53, 80, 8080) | table src_ip]) | table dev_ip, OS_Type

but I get this error message:

    Error in 'search' command: Unable to parse the search: Right hand side of IN must be a collection of literals. '(src_ip = "130.197.32.155")' is not a literal.

Thank you.
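A common fix (a sketch): drop the IN entirely and let the subsearch render its results as an implicit OR of field=value pairs; renaming src_ip to dev_ip inside the subsearch makes the generated terms match the outer field.

    index=index2
        [ search index=index1 service IN (22, 53, 80, 8080)
          | rename src_ip as dev_ip
          | fields dev_ip
          | dedup dev_ip ]
    | table dev_ip, OS_Type

The subsearch expands to (dev_ip="x.x.x.x" OR dev_ip="y.y.y.y" OR ...), which is exactly what the outer search needs. Keep the usual subsearch limits (result count and runtime) in mind if index1 returns many distinct IPs.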
Has anyone run into the interesting effect that isnum() thinks that "NaN" is a number? So:

    isnum("NaN") is true
    "NaN" * 2 = "NaN"
    but tonumber("NaN") is NULL

Are there any other odd, uh, numbers besides Not a Number? I made up the following silly query as an illustration:

    | makeresults
    | eval num="blubb;NaN;100;0.5;0,5;-0;NULL;"
    | makemv delim=";" allowempty=true num
    | mvexpand num
    | eval isnum=if(isnum(num),"true","false")
    | eval isint=if(isint(num),"true","false")
    | eval isnull=if(isnull(num),"true","false")
    | eval calcnum=num*2
    | eval isnumcalcnum=if(isnum(calcnum),"true","false")
    | eval isnullcalcnum=if(isnull(calcnum),"true","false")
    | eval numnum=tonumber(num)
    | eval isnumnum=if(isnum(numnum),"true","false")
    | eval isnullnumnum=if(isnull(numnum),"true","false")
    | table num,isnum,isint,isnull,calcnum,isnumcalcnum,isnullcalcnum,numnum,isnumnum,isnullnumnum

which results in:

    num      isnum  isint  isnull  calcnum  isnumcalcnum  isnullcalcnum  numnum  isnumnum  isnullnumnum
    blubb    false  false  false            false         true                   false     true
    NaN      true   false  false   NaN      true          false                  false     true
    100      true   true   false   200      true          false          100     true      false
    0.5      true   false  false   1        true          false          0.5     true      false
    0,5      false  false  false            false         true                   false     true
    -0       true   true   false   -0       true          false          -0      true      false
    NULL     false  false  false            false         true                   false     true
    (empty)  false  false  false            false         true                   false     true

(Post moved over from the Splunk Enterprise group.)
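A hedged workaround that follows from the behaviour shown above: since tonumber("NaN") is NULL while isnum("NaN") is true, isnotnull(tonumber(x)) acts as a stricter numeric test that rejects NaN.

    | makeresults
    | eval num="NaN"
    | eval loose=if(isnum(num),"true","false")                ``` true ```
    | eval strict=if(isnotnull(tonumber(num)),"true","false") ``` false ```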
Hi, in a table I am looking to fill a field value from the previous available value when it is null. The dataset is queries pulling out some DB records; events for the same query are split across multiple events (incremental records). The issue is that the query field is not populated in every event, only the first one, so I am trying to fill the query value from the first event into all subsequent ones. I have used streamstats, which almost works but skips some cases:

    | streamstats current=f last(query) as previous_query reset_before="("match(query,\"\")")" by temp_field

Maybe we can add logic to assign the value where the previous record count is less than the current record count and query is empty:

    | streamstats current=f window=1 last(records) as pre_records reset_before="("match(query,\"\")")" by temp_field
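If the populated event always comes before the empty ones within each group, a sketch of two simpler options (temp_field is kept from your attempt as the grouping key; the sort order is assumed to already be correct):

    ``` option 1: no grouping needed, copy the last non-null value downwards ```
    | filldown query

    ``` option 2: fill down per group; last() ignores nulls, so it carries the most recent populated value ```
    | streamstats last(query) as query_filled by temp_field
    | eval query = coalesce(query, query_filled)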
Hi, we tried to integrate the BeyondTrust privileged remote support app with Splunk to get the logs from BT PRA, per the BeyondTrust documentation: https://www.beyondtrust.com/docs/remote-support/how-to/integrations/splunk/configure-splunk.htm

The documentation has a handful of settings to configure in the data input:
1. Input name
2. Client ID & token received from BeyondTrust once they enable the API
3. PRA site ID
4. Index name
5. Source type

We provided these details but are unable to see the logs coming into Splunk. However, when checking index=_internal we can see BeyondTrust config logs, just not the actual event logs from BeyondTrust PRA. Could anyone who has integrated BT PRA share troubleshooting steps or guidance to confirm whether the issue is on the Splunk side, so that we can ask the BT team to check further from their end? This app is not developed by Splunk, so there is no Splunk support for it. Thank you, your responses are much appreciated.
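One hedged check from the Splunk side (a sketch; it assumes the add-on runs as a modular/scripted input whose name contains "beyondtrust", so adjust the keyword to the actual input name): errors such inputs write to stderr usually land in splunkd.log under ExecProcessor.

    index=_internal sourcetype=splunkd log_level IN (ERROR, WARN)
        (component=ExecProcessor OR "beyondtrust")

If that shows authentication or HTTP errors, the problem is likely between the input and the BT API; if it is silent, verify the configured index exists and search it over All Time.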
Hey all, I have a question regarding license enforcement. We currently have a "50 GB (No enforcement) Enterprise Term license" and have been exceeding it for 10 days. I have already read the "License Enforcement FAQ" and other posts, but it's not 100% clear to me whether there will be enforcement in our environment after 45 warnings in 60 days. I understand that below 100 GB there is conditional enforcement, but is this also the case with the "no enforcement" key we have? What will happen to our environment / search function if we exceed the volume more than 45 times in 60 days? Best regards, Cimey
How do I get SVC usage for each installed app and add-on in Splunk Cloud?
Hi, I have multiple events with the following JSON object:

    {
      "timeStamp": "2024-02-29T10:00:00.673Z",
      "collectionIntervalInMinutes": "1",
      "node": "plgiasrtfing001",
      "inboundErrorSummary": [
        { "name": "400BadRequestMalformedHeader", "value": 1 },
        { "name": "501NotImplementedMethod", "value": 2 },
        { "name": "otherErrorResponses", "value": 1 }
      ]
    }

I am trying to extract the name/value pairs from the inboundErrorSummary array, sum the values per name, and plot them over time. The output should be something like:

    Date                 400BadRequestMalformedHeader  501NotImplementedMethod  otherErrorResponses
    2024-02-29T10:00:00  1                             2                        1
    2024-02-29T11:00:00  10                            40                       50

Even a total count of each name field would also work. I am quite new to Splunk queries, so I hope someone can help and also explain the steps of how it's done. Thanks in advance.
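A sketch of one way to do this, assuming the events are indexed as JSON (the index and sourcetype names are placeholders): pull the array out with spath, expand it so each name/value pair becomes its own row, then timechart the summed values by name.

    index=my_index sourcetype=my_json
    | spath path=inboundErrorSummary{} output=summary
    | mvexpand summary
    | spath input=summary path=name output=err_name
    | spath input=summary path=value output=err_value
    | timechart span=1h sum(err_value) by err_name

mvexpand turns the multivalue summary field into one row per array element, and the second pair of spath calls parses each element's name and value out of that row. If Splunk did not already set _time from timeStamp at index time, add | eval _time=strptime(timeStamp,"%Y-%m-%dT%H:%M:%S.%3QZ") before the timechart.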
Really struggling with this one, so looking for a hero to come along with a solution!

I have an index of flight data. Each departing flight has a timestamp for when the pilot calls up to the control tower to request push back; this field is called ASRT (Actual Start Request Time). Each flight also has a time that it uses the runway; this is called ATOT_ALDT (Actual Take Off Time / Actual Landing Time).

What I need to calculate, for each departing flight, is how many other flights used the runway (had an ATOT_ALDT) between when the flight called up (ASRT) and when it used the runway itself (ATOT_ALDT). This is to work out what the runway queue was like for each departing aircraft.

I have tried the concurrency command, but it doesn't return the desired results: it only counts flights that started before, not ones that started after. An aircraft may call up after another one but still depart before it, and concurrency doesn't capture that.

So I found an approach that in theory should work: an eventstats that lists the take off/landing time of every flight, which I can then mvexpand and evaluate line by line. However, multi-value fields have a limit of 100, and there can be up to 275 flights in the time period I need to check. Can anyone think of another way of achieving this? My code is below.

REC_UPD_TM = the time the record was updated (this index uses the flight's scheduled departure time as _time, so we need to find the latest record for each flight)
displayed_flyt_no = the flight number, e.g. EZY1234
DepOrArr = whether the flight was a departure or an arrival

    index=flights
    | eval _time = strptime(REC_UPD_TM."Z","%Y-%m-%d %H:%M:%S%Z")
    | dedup AODBUniqueField sortby - _time
    | fields AODBUniqueField DepOrArr displayed_flyt_no ASRT ATOT_ALDT
    | sort ATOT_ALDT
    | where isnotnull(ATOT_ALDT)
    | eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S")
    | table DepOrArr displayed_flyt_no ASRT asrt_epoch ATOT_ALDT runway_epoch
    | eventstats list(runway_epoch) as runway_usage
    | search DepOrArr="D"
    | mvexpand runway_usage
    | eval queue = if(runway_usage>asrt_epoch AND runway_usage<runway_epoch,1,0)
    | stats sum(queue) as queue by displayed_flyt_no
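One possible direction that avoids mvexpand and its 100-value limit entirely (a sketch, not tested against your data; field names follow your query): number the runway usages in time order once, inject a synthetic "call-up" row per departure at its ASRT, and derive each departure's queue as the difference between its own sequence number and the count of usages at its call-up time.

    ... your base search down to the table command ...
    | eval row_type="runway", t=runway_epoch
    ``` add one extra row per departure, stamped at its call-up time ```
    | appendpipe
        [ where DepOrArr="D" AND isnotnull(asrt_epoch)
          | eval row_type="call", t=asrt_epoch ]
    | sort 0 t
    ``` running count of runway usages in time order ```
    | streamstats count(eval(row_type="runway")) as runway_seq
    | stats max(eval(if(row_type="call", runway_seq, null()))) as seq_at_call
            max(eval(if(row_type="runway", runway_seq, null()))) as own_seq
            by displayed_flyt_no
    | where isnotnull(seq_at_call)
    ``` usages strictly between call-up and own take-off ```
    | eval queue = own_seq - 1 - seq_at_call

The idea: seq_at_call counts every runway usage up to the moment the pilot called, own_seq is the departure's own position in the runway order, and the difference (minus the flight itself) is how many aircraft used the runway in between, regardless of whether they called up earlier or later.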
How can we colour the text green when the status is "running" and red when it is "stopped" for a single value visualization in Dashboard Studio? My code is below:

    "ds_B6p8HEE0": {
        "type": "ds.chain",
        "options": {
            "enableSmartSources": true,
            "extend": "ds_JRxFx0K2",
            "query": "| eval status = if(OPEN_MODE=\"READ WRITE\",\"running\",\"stopped\") | stats latest(status)"
        },
        "name": "oracle status"
    }
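A possible direction (a sketch based on Dashboard Studio's dynamic coloring options; the visualization ID and hex colors here are hypothetical): first give the stats result a stable field name, e.g. | stats latest(status) as status, then map values to colors with matchValue in the single value's majorColor option.

    "viz_oracle_status": {
        "type": "splunk.singlevalue",
        "dataSources": { "primary": "ds_B6p8HEE0" },
        "options": {
            "majorColor": "> majorValue | matchValue(statusColors)"
        },
        "context": {
            "statusColors": [
                { "match": "running", "value": "#118832" },
                { "match": "stopped", "value": "#D41F1F" }
            ]
        }
    }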