All Topics


Hi all, I started using the SA-geodistance app and I like it. However, I don't understand how it pulls the old location, or how to get the timestamp of the old location it is pulling. https://splunkbase.splunk.com/app/3232/ or https://github.com/seunomosowon/SA-geodistance
When closing a notable event in Splunk Enterprise Security, the following fields are typically available:

Status
Change urgency
Owner
Description
Summary/Notes

Is there a way to add a new field with a custom drop-down to the closure of the notable event? For example (using the list above), I would create a new field called Category with a drop-down list to select the type of category:

Status
Change urgency
Owner
Category
Description
Summary/Notes
Hi, I'm trying to get failed logins from users who try to authenticate to Splunk using curl. My command was:

curl -k https://localhost:8089/services/auth/login --data-urlencode username=myUser --data-urlencode password=myWrongPass

and I get an XML response saying that the username or password is incorrect. But with this SPL search:

index="_audit" action="login attempt" curl

I only get successful authentications, not failed ones. I'm interested in getting a list of all failed logins that used curl. Event example:

Audit:[timestamp=05-12-2020 16:11:55.106, user=myuser, action=login attempt, info=succeeded reason=user-initiated useragent="curl/7.69.1" clientip=127.0.0.1 session=3a7b3720876a61c93d1584b2b8613fe1][n/a]
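A sketch of one possible search, assuming failed attempts land in _audit with info=failed (untested — verify the exact field value against one of your own failed-login events):

```
index="_audit" action="login attempt" info=failed useragent="curl*"
| table _time user clientip useragent info
```

If nothing matches, the failed attempts may simply not be written to _audit on your version, which would explain why only successes show up.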
Hi all, I am still a Splunk novice but I am looking for some help using earliest(). I am calculating a duration from the beginning of my search period to the first event in the search period. For example, let's say the time frame is 08:00 - 09:00 and the first event is seen at 08:15. This is my code:

| stats earliest(_time) as FirstEvent
| addinfo
| eval duration=(FirstEvent - info_min_time)

So I am able to get the timestamp of the earliest event at 08:15, BUT I would also like to get the fields associated with that earliest event. I have tried using two earliest() calls but one overrides the other. In short, how do I get the fields associated with the (earliest) 08:15 event?
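One possible approach is to keep the whole earliest event rather than aggregating it away (a sketch, untested):

```
| sort 0 _time
| head 1
| addinfo
| eval duration=(_time - info_min_time)
```

head 1 after an ascending sort keeps the first event with all of its fields intact, so nothing needs to be re-joined afterwards.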
Attached are my events. I want rex to extract the highlighted text from the events; the events are logged under the field name JobName.
========================================================
krwesx05.krw.app.com-IDPD3VPSEC01-Daily-Incremental-Backup-to-Disk
krwesx06.krw.app.com-krwbe3-Daily-Incremental-Backup-to-Disk
IDPD2VPIVC01-Application-02-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Web-Server-01-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Mail-Server-01-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-File-Servers-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Mail-Server-01-Daily-Incremental-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-KRWHR1-Backup-Daily-Incremental-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Application-03-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Application-01-Daily-Incremental-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC02-Application-03-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Application-02-Daily-Incremental-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Active-Directory-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Application-01-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC02-Active-Directory-Weekly-Full-Backup-to-StoreOnce-Catalyst
IDPD2VPIVC01-Mail-Server-02-KRWLN3-Daily-Incremental-Backup-to-StoreOnce-Catalyst
idwikppads01.app.com-Daily-Incremental-Backup-to-VTL
APP_Gold_VM_Image_Backup_01-Daily-Incremental-Backup-to-VTL
APP_Global_AD-Daily-Incremental-Backup-to-VTL
SRPWEB9-Daily-Incremental-Backup-to-VTL

Post rex, I would want results like:

Daily-Incremental-Backup
Weekly-Full-Backup
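A rex along these lines might work, assuming every JobName contains either Daily-Incremental-Backup or Weekly-Full-Backup (a sketch, untested):

```
| rex field=JobName "(?<Schedule>(?:Daily|Weekly)-(?:Incremental|Full)-Backup)"
| stats count by Schedule
```

The non-capturing alternations keep the match anchored to the schedule words, so the host and application prefixes are ignored.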
How can we use spath on the JSON below to evaluate whether, for ConcurrentAsyncGetReportInstances, Remaining/Max*100 is >= 70%? Could anyone please help?

{
  "AnalyticsExternalDataSizeMB":{ "Max":478600, "Remaining":40960 },
  "ConcurrentAsyncGetReportInstances":{ "Max":400, "Remaining":200 },
  "ConcurrentEinsteinDataInsightsStoryCreation":{ "Max":5, "Remaining":5 },
  "ConcurrentEinsteinDiscoveryStoryCreation":{ "Max":2, "Remaining":2 },
  "ConcurrentSyncReportRuns":{ "Max":20, "Remaining":20 },
  "DailyAnalyticsDataflowJobExecutions":{ "Max":60, "Remaining":60 },
  "DailyAnalyticsUploadedFilesSizeMB":{ "Max":51200, "Remaining":51200 },
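A sketch of the spath/eval combination (untested; assumes the JSON is the raw event):

```
| spath path=ConcurrentAsyncGetReportInstances.Max output=Max
| spath path=ConcurrentAsyncGetReportInstances.Remaining output=Remaining
| eval pct=round(Remaining / Max * 100, 2)
| where pct >= 70
```

spath with an explicit path avoids extracting every key in the document, and the where clause leaves only events at or above the 70% threshold.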
I have a query which uses streamstats, eventstats, stats, and transaction (trying to implement brute-force-attack logic). It displays the expected search results when I give a narrow date range (from 05/12/2020 at 17:30:00 to 05/12/2020 at 17:35:00, which is just 5 minutes). But the same search produces a different result when the date range is wider, e.g. from 05/12/2020 at 17:20:00 to 05/12/2020 at 17:45:00, which is about 25 minutes. Please let me know why this happens. The query used is:

index=wineventlog_sec* tag=authentication (action=success OR action=failure)
| table _time user dest EventCode action
| sort 0 user _time dest
| streamstats count as attempts by action user dest reset_on_change=true
| streamstats count(eval(attempts=1)) as sessions by user dest
| eventstats count as max_attempts by sessions user dest
| eval success_session=(sessions-1)
| eventstats max(eval(case(match(action,"failure") AND attempts=1 AND max_attempts>50 ,_time))) as lastFailed max(eval(case(match(action,"success") AND attempts=1,_time))) as lastSuccess by action user dest success_session
| search attempts=1
| transaction user dest maxspan=1m maxevents=2
| search lastFailed=* AND lastSuccess=*
Hello there, I'm trying to fill a multiselect input with an initial value based on a token. The token is based on a lookup search.

<form>
  <label>dashboard</label>
  <search>
    <query>| inputlookup lookup.csv</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
    <done>
      <set token="tok_lookup">$result.column$</set>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="multiselect" token="tok_multiselect" searchWhenChanged="false">
      <label>Multi-Select</label>
      <delimiter>,</delimiter>
      <fieldForLabel>column</fieldForLabel>
      <fieldForValue>column</fieldForValue>
      <search>
        <query>| loadjob savedsearch="admin:app:multiselect_list"</query>
        <earliest>-2h@h</earliest>
        <latest>now</latest>
      </search>
      <initialValue>$tok_lookup$</initialValue>
    </input>
  </fieldset>
  ...
</form>

The inputlookup of lookup.csv does return a single column with multiple rows if run in a separate search:

column
-----------
value1
value2
value3

value1, value2, value3 should show up as separate initial values in the multiselect input. Currently the values do not show up at all. If I use the token as default, only the first value "value1" is populated to the multiselect field. Thanks a lot for your help. Regards, Philipp
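One likely cause: $result.column$ in a <done> handler only reads the first result row. A sketch that collapses all rows into one delimited value before setting the token (untested):

```
<search>
  <query>| inputlookup lookup.csv | stats values(column) as column | eval column=mvjoin(column, ",")</query>
  <done>
    <set token="tok_lookup">$result.column$</set>
  </done>
</search>
```

With the comma delimiter matching the input's <delimiter>, the multiselect can then split the single token back into separate initial values.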
Hello, I have a search, and when I run it, it returns 514,299 events. To speed up load times I have saved and scheduled that search, keeping the same time range and raising the dispatch.max_count parameter in savedsearches.conf to 600,000 to ensure that no data is lost. When I inspect the scheduled saved search execution, however, I notice that it doesn't return all of the results, even though it scans them. This discrepancy can be as many as 30,000 results: it's never the same amount after every scheduled execution, and it never matches the results returned if I run the search independently. Any ideas as to why this is happening? Any parameters I can check? Thanks! Andrew
Hi all, for a few days I have been battling with the following and I am on the losing end, so all help is welcome of course. Instead of "no result found" in the graph area, I want a visual that in that case shows all "0". My query is as follows:

index=index host=test
| rex field=_raw "(?ms)^(?:[^ \\n]* ){6}(?P<SyslogMessage>[^:]+)(?:[^ \\n]* ){7}(?P<src_ip>[^ ]+) to (?P<dest_ip>[^ ]+)"
| eval msg = if(match(SyslogMessage,"%ABC-1-*"),"alert", if(match(SyslogMessage,"%ABC-2-*"),"critical","Other"))
| search NOT msg="other"
| timechart span=360s count(msg) as cnt, first(BaseLine) as Baseline by msg
| eval BaseLine=8

I tried several options, such as adding this before the last | eval BaseLine=8:

| fillnull value=0 cnt

Looking for some magic. S
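When zero events match, timechart has no rows at all, so fillnull has nothing to fill. One common workaround is to guarantee at least one row exists by appending a placeholder result before the timechart (a sketch, untested; how the placeholder series renders in your chart should be verified):

```
index=index host=test
| eval msg = if(match(SyslogMessage,"%ABC-1-*"),"alert", if(match(SyslogMessage,"%ABC-2-*"),"critical","Other"))
| search NOT msg="Other"
| append [| makeresults]
| timechart span=360s count(msg) as cnt by msg
| fillnull value=0
```

The makeresults row carries no msg value, so count(msg) contributes 0, but it forces timechart to emit time buckets that fillnull can then zero out.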
I have a search query created from a JSON file. The problem is that the values I have appear in one row, instead of 3 rows (in the JSON file we have three ids, each with a number and status). Thanks in advance!!
hi, I'm finding a lot of these errors in my ITSI cluster (4.4.1/7.3.3) in index=_internal:

05-12-2020 12:56:01.520 +0200 ERROR SearchParser - The search specifies a macro 'assess_severity' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.

I find 4 such macros in the app SA-ITOA, all requiring 1 to 6 different arguments. Sharing is Global, and everyone has read permissions. Anyone with an answer or a suggestion on what to do?
Hi, there are 3 events that were logged at exactly the same time, say 2020-04-28 15:39:34. When the search query uses the index directly, the 3 events are displayed separately. But when I use the tstats command, it combines all 3 events because they were logged at the same time. Is there any way to show these events as separate events while using tstats? The query I have used for fetching the data:

| tstats count values(Authentication.action) as Authentication.action values(Authentication.src) as Authentication.src values(Authentication.signature_id) as Authentication.signature_id values(Authentication.signature) as Authentication.signature from datamodel=Authentication where (Authentication.action=success OR Authentication.action=failure) by _time Authentication.user Authentication.dest span=1s

The count column shows me exactly how many times the event occurred at that particular time. Instead of this, is there any way to display each event separately? Thanks in advance.
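tstats only returns aggregates, so simultaneous events collapse into one row per group. One way to keep them apart is to move the distinguishing fields out of values() and into the by clause, so events that differ in any of those fields get their own row (a sketch, untested):

```
| tstats count from datamodel=Authentication
    where (Authentication.action=success OR Authentication.action=failure)
    by _time Authentication.user Authentication.dest Authentication.src
       Authentication.action Authentication.signature_id Authentication.signature span=1s
```

Truly identical events (same second, same values in every by field) will still merge into one row with count > 1; that is inherent to tstats.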
I have a string: 14/04/2020|A3|ABC149251|text i really need. Can I run something which will trim this string from the end until it reaches the first | (pipe symbol)? I tried rex for this, but an error comes up that I am not able to resolve, so I thought of taking the above approach. The regular expression I am trying is:

(?^\d{2}\/\d{2}\/\d{2,4}|A\d|\ABC\d*|)(?[\w*\s-]+)

and I get the error below:

Error in 'rex' command: Encountered the following error while compiling the regex '(?^\d{2}\/\d{2}\/\d{2,4}|A\d|\INC\d*|)(?[\w*\s-]+)': Regex: unrecognized character follows .

Please either correct my regex or let me know how to trim.
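The compile error is most likely because the groups have no names: (?...) must be (?<name>...) in rex. If the goal is just the text after the last pipe, a much simpler pattern should do (a sketch; the field name myfield is a placeholder for your actual field):

```
| rex field=myfield "\|(?<wanted_text>[^|]+)$"
```

[^|]+ anchored to the end of the string with $ grabs everything after the final pipe, regardless of how many pipes precede it.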
We got a requirement to install the TA-Meraki app in Splunk.

1. Downloaded the zip file from the link provided: https://splunkbase.splunk.com/app/3018/
2. Copied the zip file to the Splunk search head server, to the location /opt/splunk/etc/apps
3. Unzipped the file; a folder named TA-Meraki was created.
4. Changed the owner and group of the TA-Meraki folder to splunk:splunk recursively.
5. Restarted Splunk: /opt/splunk/bin/splunk restart

We can see the TA-Meraki app in Splunk Web. To send log data from Meraki to the Splunk server, you enable and add your syslog server under Network-wide >> General >> Reporting >> Syslog servers. This needs to be done by the application team on the Meraki device, and they confirmed that this has been done. We have created an index named meraki and added port 514 to Data inputs in Splunk.

The user complained that the device is logging successfully into Splunk, but nothing is appearing in the app and the source is still showing syslog rather than Meraki. What steps are we missing?

- Does an inputs.conf file need to be created? If yes, where do we need to create the file: on Splunk or on the Meraki device?
- Does a meraki sourcetype need to be created? If yes, can we follow the method below? Go to /opt/splunk/etc/apps/TA-meraki/local on the Splunk search head server and create a props.conf file (it does not exist) with the following stanza, then restart Splunk:

[source=]
sourcetype=meraki
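inputs.conf belongs on the Splunk instance that receives the syslog traffic, not on the Meraki device. A sketch of what it might look like (assumptions: UDP syslog on port 514 and an index named meraki; the sourcetype the TA actually expects should be checked against the props.conf shipped in the app's default directory):

```
# $SPLUNK_HOME/etc/apps/TA-meraki/local/inputs.conf (assumed path)
[udp://514]
index = meraki
sourcetype = meraki
connection_host = ip
```

Setting the sourcetype directly on the input is what stops the data from landing as generic syslog.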
My search has data in the format below. A single host has multiple parameters consisting of LED 1..20 for each TV, and there are 24 TVs. The LED power parameter has a value, say max(VAL) 34.0, which is related to a PA (power amplitude) of Low/High; we only want to search for PA = Low.

Query:

source="c:\\program files (x86)\\xxxx" "PLogger" TV earliest=-2d@d latest=now PA = Low
| stats max(VAL) by host, TV, LED, PA, _time
| fields "host" "LED", "PA", "TV", "max(VAL)"

Result:

host LED PA TV max(VAL) _time
03192610158 0 Low A1 48.863 2019-12-19 22:00:08.177
03192610158 0 Low A1 48.61 2019-12-20 22:00:08.140
031................. 1 Low A1 44.23 2019-12-19 22:00:08.177
031................. 1 Low A1 45.23 2019-12-20 22:00:08.177
...
031................. 19 Low A1 49.23 2019-12-19 22:00:08.177
031................. 19 Low A1 50.23 2019-12-20 22:00:08.177
...
031................. 1 Low A2 52.23 2019-12-19 22:00:08.177
031................. 1 Low A2 53.73 2019-12-20 22:00:08.177

and it continues the same way for each host, each TV, and its 20 LEDs. Now I need to calculate the percentage difference of LED 1, 2, ..., 19 for each TV (A1..A24) and raise an alert for any LEDs that drop by 5%.
This is the Splunk query I use:

source="c:\\program files (x86)\\prysm\\servo\\logs\\vegaservo.log" "PLogger" earliest=-7d@d latest=now TV PA = Low
| stats max(VAL) as max_val by host, TILE, Laser, PA, _time
| fields host, TV, LED, PA, max_val, _time
| streamstats current=f values(max_val) as prev_val by LED TV host
| eval perc_diff=((max_val - prev_val)/((max_val + prev_val)/2)*100)
| where perc_diff > 5

Output for one host:

Host :::::: TV ::::::: LED :::::::: PA ::::: max_val ::::: _time :::: perc_diff ::::: prev_val
DESKTOP-3S2CV0M :::: E1 ::::: 16 :::: Low :::: 30.354 ::::: 2020-05-06 10:00:46.221 :::: 5.136 ::::: 28.834

Cross-checking the host data for the week:

11 May 2020 05:00:46,276 [4] INFO PLogger : TV = E1, Laser = 16, PA = Low, VAL = 31.512
10 May 2020 05:00:46,211 [11] INFO PLogger : TV = E1, LED = 16, PA = Low, VAL = 30.124
09 May 2020 05:00:46,227 [10] INFO PLogger : TV = E1, LED = 16, PA = Low, VAL = 30.695
08 May 2020 05:00:46,307 [11] INFO PLogger : TV = E1, LED = 16, PA = Low, VAL = 28.731
07 May 2020 05:00:46,666 [5] INFO PLogger - : TV = E1, LED = 16, PA = Low, VAL = 28.452
06 May 2020 05:00:46,221 [16] INFO PLogger - : TV = E1, LED = 16, PA = Low, VAL = **30.354**
05 May 2020 05:00:47,196 [16] INFO PLogger : TV = E1, LED = 16, PA = Low, VAL = **28.834**

The problem here is that the value is only getting calculated between the last 2 days, as you can see in the highlighted data above. I am stuck on how to get the alert right. How can I get the correct perc_diff alert for the week?
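If the intent is the change over the whole week rather than day-to-day, comparing the earliest and latest value per LED might be closer to the goal (a sketch, untested; field names taken from the query above, and "drop by 5%" read as a negative change):

```
source="c:\\program files (x86)\\prysm\\servo\\logs\\vegaservo.log" "PLogger" earliest=-7d@d latest=now TV PA=Low
| stats earliest(VAL) as first_val latest(VAL) as last_val by host TV LED PA
| eval perc_diff=((last_val - first_val)/((last_val + first_val)/2))*100
| where perc_diff < -5
```

Note that the original where perc_diff > 5 alerts on increases; a drop produces a negative perc_diff, hence the flipped comparison here.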
Hi everyone, please help me write a cron expression to run a scheduled search at 2:30, 10:30, and 18:30 every day. Thanks in advance.
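Cron fields are minute, hour, day-of-month, month, day-of-week, so 2:30, 10:30, and 18:30 every day would be:

```
30 2,10,18 * * *
```

The comma-separated hour list fires once at minute 30 of each listed hour.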
Hi, I'm fairly new to Splunk and I need to round my time field up/down to the nearest hour. For example, if now() returns 09:26:52 I want it rounded to 09:00:00; if the time is 14:36:18, then 15:00:00. I have searched and can't find or understand how to do this. Can someone help me with how? Thanks
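Since _time is epoch seconds, dividing by 3600, rounding, and multiplying back rounds to the nearest hour (a sketch; the field names are placeholders):

```
| eval nearest_hour=round(_time/3600)*3600
| eval nearest_hour_str=strftime(nearest_hour, "%H:%M:%S")
```

09:26:52 divides to x.44 hours and rounds down to 09:00:00, while 14:36:18 divides to x.60 and rounds up to 15:00:00.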
Hi team, we tried to pull PostgreSQL data into Splunk via DB Connect. The customer has enabled SSL at their end. If they disable SSL it works, but the customer wants SSL to stay enabled. We have an Enable SSL option in Splunk DB Connect; if I check this box and then try to create an input, it shows "Invalid database connection". What should I do to enable SSL from Splunk DB Connect?
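One thing worth checking is the JDBC URL itself: the PostgreSQL JDBC driver accepts SSL parameters directly on the connection string, which can be edited in the DB Connect connection settings (a sketch; host, port, database, and the sslmode value are assumptions to adapt to the customer's server):

```
jdbc:postgresql://<host>:5432/<database>?ssl=true&sslmode=require
```

sslmode=require skips certificate validation; stricter modes such as verify-full would additionally need the server's CA certificate available to the driver.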
I have JSON data that comes in with tracking IDs. An event is created when an ID is "created" and another event when an ID is "closed". Both events share the same alert ID, and I'm struggling with a query that dedups the alert ID and puts the times the IDs were created and closed into columns. The times can be when the events were indexed, or there is also an extracted JSON field "date". I'm looking for a table something like this:

ID | Created_Time | Closed_Time
x  | xxxxxxxxxxxx | xxxxxxxx

Thanks in advance.
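A sketch using conditional aggregation (untested; the field names id and status are assumptions based on the description and should be replaced with the actual extracted field names):

```
| stats min(eval(if(status="created", _time, null()))) as Created_Time
        max(eval(if(status="closed", _time, null()))) as Closed_Time
        by id
| fieldformat Created_Time=strftime(Created_Time, "%F %T")
| fieldformat Closed_Time=strftime(Closed_Time, "%F %T")
```

Each id collapses to one row, with the eval/if pairs picking the timestamp from whichever event carried the matching status.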