All Posts



Post the output of: | inputlookup testip — if it's too long, post the part with IP 10.10.10.x
Hi, since @jotne used %z as the time zone information in props.conf, you should also add it to your regex, and then if/when needed use the strptime and strftime functions to convert that field. At ingestion time that happens automatically with correct TIME* definitions. r. Ismo
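A minimal sketch of that strptime/strftime round-trip at search time (the field name and format string are assumptions about the sample layout, not from the thread; adjust to your actual data):

```
| eval date_epoch = strptime(date, "%d.%m.%y %a %e%b%y")
| eval date_iso   = strftime(date_epoch, "%Y-%m-%d")
```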
As I have used replace in those examples, you can use it the same way. In those cases I took some part of the source (e.g. yyyymmmdd from the file path) and used it as a field value. Basically, replace is one way to use regex in Splunk.
Thank you so much for your great help
I guess the first 9 days in every month have just one digit. This should do:

,\s(\d\d\.\d\d\.\d\d\s\w+\s+\d+\w+\d\d)\s

Added a + behind the space since there may be more than one space.

TIME_FORMAT = %z, %T %a %e%b%y

Changed %d to %e.
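Pulling the pieces of this thread together, the props.conf stanza might look roughly like this (the sourcetype name and the MAX_TIMESTAMP_LOOKAHEAD value are assumptions; TIME_PREFIX just anchors on the comma before the timestamp, and TIME_FORMAT is the one suggested above):

```
[my_sourcetype]
TIME_PREFIX = ,\s
TIME_FORMAT = %z, %T %a %e%b%y
MAX_TIMESTAMP_LOOKAHEAD = 40
```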
Hello @jotne , this is the regex: | rex field=_raw (?<date>\s(\d\d\.\d\d\.\d\d\s\w+\s\d+\w+\d\d)\s) 
Really struggling with this one, so looking for a hero to come along with a solution!

I have an index of flight data. Each departing flight has a timestamp for when the pilot calls up to the control tower to request push back; this field is called ASRT (Actual Start Request Time). Each flight also has a time at which it uses the runway, called ATOT_ALDT (Actual Take Off Time/Actual Landing Time).

What I need to calculate, for each departing flight, is how many other flights used the runway (had an ATOT_ALDT) between when the flight called up (ASRT) and when it used the runway itself (ATOT_ALDT). This is to work out what the runway queue was like for each departing aircraft.

I have tried using the concurrency command, but it doesn't return the desired results: it only counts flights that started before, not the ones that started after. We may have an aircraft that calls up after another but departs before it, and concurrency doesn't capture that.

So I found an approach that in theory should work: I run an eventstats that lists the take off/landing time of every flight, then mvexpand that and run an eval across each line. However, multivalue fields have a limit of 100, and there can be up to 275 flights in the time period I need to check. Can anyone think of another way of achieving this? My code is below.

REC_UPD_TM = the time the record was updated (this index uses the flight's scheduled departure time as _time, so we need to find the latest record for each flight)
displayed_flyt_no = the flight number, e.g. EZY1234
DepOrArr = whether the flight was a departure or an arrival
index=flights
| eval _time = strptime(REC_UPD_TM."Z","%Y-%m-%d %H:%M:%S%Z")
| dedup AODBUniqueField sortby - _time
| fields AODBUniqueField DepOrArr displayed_flyt_no ASRT ATOT_ALDT
| sort ATOT_ALDT
| where isnotnull(ATOT_ALDT)
| eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S")
| table DepOrArr displayed_flyt_no ASRT asrt_epoch ATOT_ALDT runway_epoch
| eventstats list(runway_epoch) as runway_usage
| search DepOrArr="D"
| mvexpand runway_usage
| eval queue = if(runway_usage>asrt_epoch AND runway_usage<runway_epoch,1,0)
| stats sum(queue) as queue by displayed_flyt_no
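One possible workaround for the 100-value cap described above: the list() function is limited by list_maxsize in limits.conf (default 100), so raising it on the search head should let the existing eventstats/mvexpand approach handle ~275 flights. This is a sketch; the value 1000 is an arbitrary assumption, and raising it increases search memory usage:

```
# limits.conf (search head)
[stats]
list_maxsize = 1000
```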
Hi @isoutamo, thank you for your hint, but with INGEST_EVAL I can only use an eval function, whereas I need a regex to extract a field from another field. The correct way is the first one I used, but there's something wrong and I don't understand what. Maybe the source field isn't extracted yet when I try to extract a part of the path with a regex. Ciao. Giuseppe
Hi, it worked fine until February, but for some reason the date is not getting extracted for March. Could you please help? I want the date extracted for all months as the days go by.
Actually, I am looking for a query for a scenario where a few instances on my hosts went down. There were no logs for 2 hours, but after 2 hours the logs were captured again. So if no logs have come from a server in the past 30 minutes, it should trigger an alert.
How about INGEST_EVAL? Here are some examples:
https://community.splunk.com/t5/Getting-Data-In/How-to-get-props-and-transforms-to-extract-time-from-source/m-p/644598/highlight/true#M109720
https://community.splunk.com/t5/Getting-Data-In/How-to-apply-source-file-date-using-INGEST-as-Time/m-p/596865
How can we colour the text green for a status of "running" and red for "stopped" in a single value visualization in Splunk Dashboard Studio? My code is below:

"ds_B6p8HEE0": {
    "type": "ds.chain",
    "options": {
        "enableSmartSources": true,
        "extend": "ds_JRxFx0K2",
        "query": "| eval status = if(OPEN_MODE=\"READ WRITE\",\"running\",\"stopped\") | stats latest(status)"
    },
    "name": "oracle status"
}
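A sketch of one way to answer this, following the Dashboard Studio dynamic-coloring pattern (the option and context names follow the Dashboard Studio docs for matchValue-based coloring; the visualization id and hex colours are assumptions, and the data source query should return the status string as the single value):

```
"viz_oracle_status": {
    "type": "splunk.singlevalue",
    "dataSources": { "primary": "ds_B6p8HEE0" },
    "options": {
        "majorColor": "> majorValue | matchValue(majorColorEditorConfig)"
    },
    "context": {
        "majorColorEditorConfig": [
            { "match": "running", "value": "#118832" },
            { "match": "stopped", "value": "#D41F1F" }
        ]
    }
}
```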
Hi @PickleRick and @isoutamo, I also tried to solve the issue at search time, but there are many sourcetypes to associate this field with, so I tried to create a field extraction associated with source=/var/log/remote/*, but it still doesn't run, probably because I cannot use the wildcard character in a source for field extractions. Ciao. Giuseppe
@vk2 You can check the document below. The Splunk universal forwarder is compatible with Linux with kernel 4.x or higher. If you have kernel 3.x, Splunk supports the platform and architecture, but might remove support in a future release.
https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements#Confirm_support_for_your_computing_platform
Hi @PickleRick and @isoutamo, thank you for your hints. This is the new transforms.conf:

[relay_hostname]
REGEX = (/var/log/remote/)([^/]+)(/.*)
FORMAT = relay_hostname::$2
WRITE_META = true
#DEST_KEY = relay_hostname
SOURCE_KEY = MetaData:Source
REPEAT_MATCH = false

I tried with your hints but they don't run. What could I try next? Ciao. Giuseppe
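For what it's worth, an index-time extraction like this usually needs three pieces wired together; a hedged sketch of the standard pattern (the transform name is from the post, the props.conf and fields.conf stanzas are assumptions about the intended setup):

```
# transforms.conf
[relay_hostname]
SOURCE_KEY = MetaData:Source
REGEX = ^/var/log/remote/([^/]+)/
FORMAT = relay_hostname::$1
WRITE_META = true

# props.conf (parsing tier: indexer or heavy forwarder)
[source::/var/log/remote/...]
TRANSFORMS-relay = relay_hostname

# fields.conf (search head, so the indexed field is searchable)
[relay_hostname]
INDEXED = true
```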
Only the file authentication.conf is used. The other one, with a different extension, is not used (which should be shown in the output of the command @isoutamo provided). Splunk only uses files with the exact name needed, not ones with additional prefixes or suffixes (but they can be in various directories, from which they are "layered" onto each other according to the precedence rules). https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Wheretofindtheconfigurationfiles Why your authentication doesn't work, though, we don't know; there is not enough information. Look in your logs and in your authentication server (LDAP?) logs. That might shed some light on the reasons.
I’m sorry, I should’ve been more specific. The files don’t have the same name: one is called authentication.conf and the other authentication.conf_2. I have updated the binddnpassword in the authentication.conf file and rebooted the server; the password got hashed, but I’m still not able to log into Splunk.
Hi, my old answer for this: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 r. Ismo
One old post about this https://community.splunk.com/t5/Alerting/How-to-detect-when-a-host-stop-sending-data-to-Splunk/m-p/563571
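The linked post boils down to a pattern like the following sketch (the index filter and the 1800-second threshold are placeholders to adjust for your environment):

```
| tstats latest(_time) as last_seen where index=* by host
| eval gap_secs = now() - last_seen
| where gap_secs > 1800
```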