Really struggling with this one, so looking for a hero to come along with a solution! I have an index of flight data. Each departing flight has a timestamp for when the pilot calls the control tower to request pushback; this field is called ASRT (Actual Start Request Time). Each flight also has a time that it uses the runway, called ATOT_ALDT (Actual Take Off Time/Actual Landing Time). What I really need to calculate, for each departing flight, is how many other flights used the runway (had an ATOT_ALDT) between when that flight called up (ASRT) and when it used the runway itself (ATOT_ALDT). This is to work out what the runway queue was like for each departing aircraft. I have tried using the concurrency command; however, this doesn't return the desired results, as it only counts flights that started before and not the ones that started after. We may have a situation where an aircraft calls up after another but then departs before it, and concurrency doesn't capture that. So I've found an approach that in theory should work: I ran an eventstats that lists the take off/landing time of every flight, so I can mvexpand that and run an eval across each line. However, multivalue fields produced by list() are limited to 100 values, and there can be up to 275 flights in the time period I need to check. Can anyone think of another way of achieving this? My code is below.

Field notes:
REC_UPD_TM = the time the record was updated (this index uses the flight's scheduled departure time as _time, so we need to find the latest record for each flight)
displayed_flyt_no = the flight number, e.g. EZY1234
DepOrArr = whether the flight was a departure or an arrival
index=flights
| eval _time = strptime(REC_UPD_TM."Z","%Y-%m-%d %H:%M:%S%Z")
| dedup AODBUniqueField sortby - _time
| fields AODBUniqueField DepOrArr displayed_flyt_no ASRT ATOT_ALDT
| sort ATOT_ALDT
| where isnotnull(ATOT_ALDT)
| eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S")
| table DepOrArr displayed_flyt_no ASRT asrt_epoch ATOT_ALDT runway_epoch
| eventstats list(runway_epoch) as runway_usage
| search DepOrArr="D"
| mvexpand runway_usage
| eval queue = if(runway_usage>asrt_epoch AND runway_usage<runway_epoch,1,0)
| stats sum(queue) as queue by displayed_flyt_no
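One possible workaround (an untested sketch, reusing the field names from the search above) is to avoid the 100-value list() entirely: union every runway usage with a start/end marker per departure into one sorted stream, keep a running count of runway usages with streamstats, and take the difference of the running count at each departure's end and start markers. Tie-breaking at identical timestamps may need care with real data.

```
index=flights
| eval _time = strptime(REC_UPD_TM."Z","%Y-%m-%d %H:%M:%S%Z")
| dedup AODBUniqueField sortby - _time
| where isnotnull(ATOT_ALDT)
| eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S")
| eval rows = if(DepOrArr="D", "runway,start,end", "runway")
| makemv delim="," rows
| mvexpand rows
| eval t = if(rows="start", asrt_epoch, runway_epoch)
| sort 0 t, rows
| eval is_runway = if(rows="runway", 1, 0)
| streamstats sum(is_runway) as running
| where rows != "runway"
| eval signed = if(rows="end", running, -running)
| stats sum(signed) as queue by displayed_flyt_no
```

Each departure contributes three rows: its own runway usage, a "start" marker at ASRT, and an "end" marker at ATOT_ALDT; arrivals contribute only their runway usage. The queue for a departure is then running_at_end minus running_at_start, with no multivalue limit involved.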
How can we colour the text green for a status of "running" and red for "stopped" in a single value visualization in Dashboard Studio (Splunk)? My code is below:

"ds_B6p8HEE0": {
    "type": "ds.chain",
    "options": {
        "enableSmartSources": true,
        "extend": "ds_JRxFx0K2",
        "query": "| eval status = if(OPEN_MODE=\"READ WRITE\",\"running\",\"stopped\") | stats latest(status)"
    },
    "name": "oracle status"
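Dashboard Studio supports dynamic colouring of a single value via the `matchValue` dynamic-options function. A sketch of the visualization stanza (the viz id and hex colours are examples, not from the question; also note the query should name its output, e.g. `stats latest(status) as status`):

```
"viz_oracle_status": {
    "type": "splunk.singlevalue",
    "dataSources": { "primary": "ds_B6p8HEE0" },
    "options": {
        "majorColor": "> majorValue | matchValue(statusColors)"
    },
    "context": {
        "statusColors": [
            { "match": "running", "value": "#118832" },
            { "match": "stopped", "value": "#D41F1F" }
        ]
    }
}
```

Here `majorValue` is the single value shown by the viz, and each `match` entry maps a string result to a colour.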
Hi all, I want a query to check and fire an alert when there are no logs from a server in the past 30 minutes. For example, we have different instances running on a host, and I want an alert when no logs have come from the server in the past 30 minutes (because the server instances are not running). So when we don't see any logs from the server for the past 30 minutes, the alert should notify that the server instances are stopped. Please help. A sample log event is below.

3/1/24 12:26:07.000 PM   www 89589 0 0.0 00:00:02 0.1 51784 2151496 ? S 35:31 httpd -d_/sys_apps_01/apache/server20Cent/versions/server2.4.56_-f_/sys_apps_01/apache/server20Cent/conf/MTF.AEM.conf host = www2stl52 source = ps sourcetype = ps
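A common pattern for "silent host" alerts (a sketch; the index name is an assumption, and note tstats only sees hosts that have reported at least once in the search window) is to compare each host's latest event time to now:

```
| tstats latest(_time) as last_seen where index=your_index sourcetype=ps by host
| eval minutes_silent = round((now() - last_seen) / 60, 0)
| where minutes_silent > 30
```

Schedule this as an alert that triggers when the number of results is greater than zero; each result row is a host that has been silent for over 30 minutes. For hosts that may drop out of the search window entirely, compare against a lookup of expected hosts instead.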
Is Splunk forwarder agent 9.2.0.1 supported on Amazon Linux 2023 (x86/ARM) using the RPM file? I got the following error, repeated several times, while starting the Splunk service:

tcp_conn_open_afux ossocket_connect failed with No such file or directory
Hi, why is my CIDR matching not following the lookup content? The query I used is as below:

| makeresults
| eval ip="10.10.10.10"
| lookup testip ip OUTPUTNEW description

The result should look like this:

ip            Description
10.10.10.10   New

But the real output looks like this:

ip            Description
10.10.10.10   New
              In Progress
              Closed

I have checked my lookup and it clearly states the Description for IP range 10.10.10.10/27 is "New". Please help, and thanks!
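Multiple values coming back usually means the lookup is matching several rows (overlapping CIDR ranges) or is not configured for CIDR matching at all. A sketch of the lookup definition in transforms.conf (stanza and filename are assumptions based on the lookup name in the question):

```
[testip]
filename = testip.csv
match_type = CIDR(ip)
max_matches = 1
```

`match_type = CIDR(ip)` tells Splunk to treat the `ip` column as CIDR ranges, and `max_matches = 1` returns only the first matching row instead of all of them. The same settings are available in the UI under Lookup definitions > Advanced options.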
I am getting an error when using the following regex:

(?<=on\s)(.*)(?=\sby Firewall Settings)

The error is: "Error in 'rex' command: regex="(?<=on\s)(.*)(?<HostName>.*)(?=\sby Firewall Settings)" has exceeded configured match_limit, consider raising the value in limits.conf." Is there a better way to do this? I am trying to find all text between "on " and " by Firewall Settings". It works in regex101.com, but I get that error in Splunk. TIA!
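The match_limit blowup typically comes from the two adjacent greedy `.*` groups combined with the lookarounds, which forces heavy backtracking. A simpler form that avoids lookarounds entirely is a single named capture with a lazy quantifier (a sketch; assumes the surrounding text is literally "on " and " by Firewall Settings"):

```
| rex field=_raw "on\s(?<HostName>.*?)\sby Firewall Settings"
```

The literal text anchors the match on both sides, so the capture group still returns only the text in between.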
Hello all, I'm bringing data into Splunk as JSON, but it comes with a syslog header in front that throws off the JSON parsing. Any suggestion on a regex to remove the leading text?

<165>Feb 29 19:06:30 server01 darktrace {"hostname":"ss-26138-03","label":"","ip_address":"10.21.32.88","child_id":null,"name":"age_alert-inaccessible_ui","priority":61,"priority_level":"high","alert_name":"Datatrace / Email: Inaccessible UI","status":"Resolved","message":"The UI is inaccessible, this could be the result of a misconfiguration or network error.","last_updated":1709233590.814423,"last_updated_status":1709233590.814423,"acknowledge_time":null,"acknowledge_timeout":null,"uuid":"1111114d-6e72-4029-8ac2-5d051be02ad5","url":"https://server01/sysstatus?alert=1481514d-6e72-4029-8ac2-5d051be02ad5","creationTime":1709233590814}
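One common approach is a SEDCMD in props.conf on the indexer or heavy forwarder, stripping everything up to the first `{` at index time (a sketch; the sourcetype name is an assumption):

```
[darktrace:json]
SEDCMD-strip_syslog_header = s/^[^{]+//
```

This leaves only the JSON payload, so `INDEXED_EXTRACTIONS = json` or search-time KV_MODE=json can parse it. It assumes the header itself never contains a `{` before the payload begins.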
What I am trying to write is some SPL that will identify log events that have only a "Starting" event with no "Completed" event, by a specific job name extracted from each log event, where all events are in the same index and sourcetype. A job is still 'running' if it has only a "Starting" event with no "Completed" event. My starting query is:

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) AND "Starting"
| rex field=_raw "Batch::(?<aJobName1>[^\s]*)"
| stats count AS aCount1 by aJobName1

Then I only want to keep log events that have no "Completed" event from the same index and sourcetype:

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) AND "Completed"
| rex field=_raw "Batch::(?<aJobName2>[^\s]*)"
| stats count AS aCount2 by aJobName2

I have tried using where isnull(aCount2) with appendcols, but stats removes the _raw data needed for the rest of my code. How would I go about getting just the raw events for jobs that have only "Starting"? I might be overthinking this, but am struggling...
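One way to do this in a single search (a sketch, reusing the index, sourcetype, and rex from the question) is to pull both event types at once and count each type per job with conditional stats:

```
index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) ("Starting" OR "Completed")
| rex field=_raw "Batch::(?<aJobName>[^\s]*)"
| stats count(eval(searchmatch("Starting"))) as starts,
        count(eval(searchmatch("Completed"))) as completes,
        latest(_raw) as last_raw by aJobName
| where starts > 0 AND completes = 0
```

`latest(_raw)` carries one raw event per job through the stats so it is still available downstream; add more `values()`/`latest()` aggregations for any other fields you need to keep.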
How do I fix replication bundle on Splunk Cloud SH?
Dear SPLUNKos, I need to create a timechart as per the below:

1. Run one "grand total" search.
2. Run a second search which is a dedup of the first search.
3. Subtract the difference and timechart only the difference.

I have got to the point below, which gives me a table of data, but I cannot get this to chart. Mr SPLUNK in my organisation tells me this cannot be done, which is borne out by the documentation on the timechart command, which indicates it can only reference field data, not calculated data. Is there a way?

<SEARCH-GRANDTOTAL>
| stats count as Grandtotal
| appendcols [ <SEARCH-2> | stats count as TotalDeDup ]
| eval diff = Grandtotal - TotalDeDup
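One way this can work (a sketch; the span and search placeholders are taken from the question) is to timechart both searches into per-bucket columns first, then eval the difference per row; timechart happily plots the result of appendcols + eval as long as _time and the calculated field survive:

```
<SEARCH-GRANDTOTAL>
| timechart span=1h count as Grandtotal
| appendcols [ search <SEARCH-2> | timechart span=1h count as TotalDeDup ]
| eval diff = Grandtotal - TotalDeDup
| fields _time diff
```

The caveat is that appendcols pastes rows together positionally, so both timecharts must use the same span and time range for the buckets to line up.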
I have a query that gets a list of destination IPs per source IP. I also want to add a column for the associated domain name per destination IP. The query I have to get destination IPs per source IP is:

index=network | stats values(dest_ip) by src_ip

I am not wanting to use eval to combine the values of dest_ip and domain into one field, and I tried mvappend but was unable to achieve the result I want. I tried | stats values(dest_ip) values(domain) by src_ip, but the dest_ip and domain columns appear to be independent of each other. What I am looking for is a table with columns:

src_ip | domain_ips | domain

I just need each domain name to be "connected" with its domain_ip.
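The misalignment happens because `values()` sorts and dedupes each field independently. One way to keep rows paired (a sketch, using the field names from the question) is a two-stage stats: first collapse to one row per src_ip/dest_ip pair, then use `list()`, which preserves row order, so the two multivalue columns stay aligned:

```
index=network
| stats values(domain) as domain by src_ip dest_ip
| stats list(dest_ip) as domain_ips, list(domain) as domain by src_ip
```

Note that `list()` is capped at 100 values per field, so this only works when each source IP has at most 100 destinations.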
Can this app please be updated to make it cloud compatible, as well as to show it's compatible with v9 of Splunk? There's no reason I can see that it can't be, other than needing a quick update of the config. I haven't run appinspect on it yet, though, so possibly that is what is stopping this.
I have a distributed deployment at version 9.0.4.1. Everything is running on RHEL 7, and the system/server team does not want to do in-place upgrades to RHEL 9. I have been tasked with migrating each node to a new replacement server (which will be renamed/re-IP-addressed to match the existing one). From what I have read this is possible, but I have a few questions. Let's consider I start with standalone nodes, like an SHC deployer, Monitoring Console, or License Manager. These are the general steps I have gathered:

1. Install Splunk (same version) on the new server.
2. Stop Splunk on the old server.
3. Copy old configs to the new server (which configs? is there a checklist documented somewhere?).
4. Start the new Splunk server and verify.

I could go through each directory copying configs, but any advice to expedite this step is appreciated. Thank you.
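As a starting point (not an official checklist; verify against your own deployment and the Splunk migration docs), the paths usually copied under $SPLUNK_HOME are:

```
# Typical $SPLUNK_HOME/etc paths to copy between servers:
#   etc/system/local/   server.conf, web.conf, outputs.conf and other local overrides
#   etc/apps/           all installed apps and their local/ configuration
#   etc/auth/           certificates and splunk.secret (needed to decrypt stored credentials)
#   etc/users/          per-user knowledge objects and preferences
#   etc/licenses/       license files (License Manager only)
#   etc/passwd          local Splunk user accounts
```

Copying splunk.secret before starting the new instance is the step most often missed; without it, hashed passwords and tokens in the copied configs cannot be decrypted.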
Hello, I use Microsoft's Visual Studio Code as a code locker for my SPL, XML, and JSON Splunk code. Does anyone have experience running SPL code from VS Code? I have the Live Server extension installed and enabled; however, it opens into a directory listing within Chrome, and when I drill down to the SPL file, instead of running the code it downloads the file. Thanks and God bless, Genesius
We have logs in two different indexes. There is no common field other than _time. The timestamp of the events in the second index is about 5 seconds later than the events in the first index. How do I join these two indexes based on the date and the hour, and try to match within the minute? Thanks,
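A common pattern for correlating on time alone (a sketch; index and field names are placeholders) is to search both indexes at once, compensate for the known skew, bucket into minutes, and aggregate per bucket rather than using the join command:

```
(index=indexA) OR (index=indexB)
| eval _time = if(index="indexB", _time - 5, _time)
| bin _time span=1m
| stats values(index) as indexes, values(host) as hosts, values(_raw) as events by _time
```

Subtracting 5 seconds from the second index's events lines the streams up before bucketing; swap in whichever fields you actually need in the stats clause.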
I'm working on building a dashboard for monitoring a system, and I would like to have a dropdown input which allows me to switch between different environments. Environments are specified using several indices, such as sys-be-dev, sys-be-stage, sys-be-prod. So a query will look something like `namespace::sys-be-prod | search ...` for prod, and the namespace index will change for other environments. I've added an input to my dashboard named NamespaceInput with values like sys-be-dev, sys-be-stage, sys-be-prod. Unfortunately, neither `namespace=$NamespaceInput$` nor `namespace::$NamespaceInput$` works. I've tried various ways of specifying the namespace index using the token, but none of them function correctly. It seems like only a hard-coded `namespace::sys-be-prod` sort of specifier works for this type of index. Any tips on how I might make use of a dashboard input to switch which index is used in a base query? Note that I'm using Dashboard Studio. Perhaps there's a way of using chained queries and making them conditional based on the value of the NamespaceInput token? Thank you!
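In Dashboard Studio, `$token$` substitution does work inside data source query strings, so the usual failure modes are a token name mismatch or a missing default value (the search never runs until the token is set). A sketch of the two stanzas (ids and default are assumptions):

```
"input_NamespaceInput": {
    "type": "input.dropdown",
    "options": {
        "token": "NamespaceInput",
        "defaultValue": "sys-be-dev",
        "items": [
            { "label": "Dev",   "value": "sys-be-dev" },
            { "label": "Stage", "value": "sys-be-stage" },
            { "label": "Prod",  "value": "sys-be-prod" }
        ]
    }
},
"ds_base": {
    "type": "ds.search",
    "options": {
        "query": "search namespace::$NamespaceInput$ | search ..."
    }
}
```

Note the token name in `options.token` must match the `$...$` reference exactly (it is the token name, not the input's display name), and a leading `search` command is needed when the query starts with a bare term.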
I am trying to use a parameter in the search with an IN condition. The query returns results if I put the data directly into the search, but my dashboard logic requires a parameter.

........
| eval tasks = task1,task2,task3
| search NAME IN (tasks)
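`IN (...)` takes a literal value list, not a field name, which is why `NAME IN (tasks)` never matches: it compares NAME against the literal string "tasks". Since the values come from a dashboard, the usual pattern (a sketch; the token name "tasks" is hypothetical) is to interpolate the token text directly into the search string:

```
index=your_index | search NAME IN ($tasks$)
```

with the token's value set to the literal list, e.g. "task1","task2","task3". The token is expanded before the search runs, so the parser sees an ordinary literal IN list.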
Hi, I need some help. I have this format type, but it seems the word 'up' is not matching for whatever reason. There are no spaces or anything in the field value. The field value is extracted using 'rex'. I have this working for other fields, but this one has me stuck. Any help will be appreciated.

<format type="color" field="state">
  <colorPalette type="expression">if (value == "up","#Green", "#Yellow")</colorPalette>
</format>
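One thing to check: `#Green` and `#Yellow` are not valid colour values; colorPalette expressions expect hex codes, so even when "up" matches, no colour is rendered. A corrected sketch (the hex values are examples; it may also help to normalise the field first with something like `| eval state=lower(trim(state))` in case of stray whitespace or case differences):

```
<format type="color" field="state">
  <colorPalette type="expression">if (value == "up", "#53A051", "#F8BE34")</colorPalette>
</format>
```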
Hi all, I'm ingesting data using HEC in a distributed infrastructure, using a load balancer to distribute traffic from many senders across our heavy forwarders. Now I need to identify the sender of each event. Is there metadata that identifies the hostname and IP address of each sender? I didn't find it in the HEC documentation. Thank you for your support. Ciao. Giuseppe
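HEC does not stamp the client's IP onto the event by default, and a load balancer hides the original source address from the HF anyway unless it forwards it. The reliable approach is to have each sender identify itself in the payload, since the HEC event JSON accepts a `host` key and arbitrary indexed `fields` (a sketch; the field name sender_ip is an assumption):

```
POST /services/collector/event
Authorization: Splunk <your-hec-token>

{
  "event":  { "message": "example payload" },
  "host":   "sender01",
  "fields": { "sender_ip": "10.1.2.3" }
}
```

Values under `fields` become indexed fields searchable as sender_ip::10.1.2.3, and `host` overrides the default host metadata for that event.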
I have a few servers with universal forwarders that need to be updated so that I can send the application data to one Splunk environment and the OS logs to another environment. I believe this is possible but just want to know how to get this done. I'm assuming the inputs.conf and outputs.conf need to be updated. Just looking for guidance.
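This is done with two tcpout groups in outputs.conf and per-input routing via `_TCP_ROUTING` in inputs.conf (a sketch; server names, ports, and the monitor path are placeholders):

```
# outputs.conf
[tcpout]
defaultGroup = os_env

[tcpout:os_env]
server = splunk-os.example.com:9997

[tcpout:app_env]
server = splunk-app.example.com:9997

# inputs.conf -- route only the application logs to the second environment
[monitor:///opt/myapp/logs]
_TCP_ROUTING = app_env
```

Inputs without an explicit `_TCP_ROUTING` fall through to the defaultGroup, so the OS log inputs need no change.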