All Posts

I'm currently using the token $results_link$ to get a direct link to alerts when they are triggered. I've also set the "Expires" field to 72 hours. However, if the alerts are triggered over the weekend, the results have always expired when we check them after 48 hours. Is it possible to keep the alert results from expiring after 48 hours?
If a scheduled search is not being run, it's typically for one of two reasons: 1) it's disabled, so it's not being scheduled, or 2) it's skipped due to SH(C) overload. If I remember correctly, there can also be some rarer causes, typically connected with role capabilities (I think if a user created a scheduled search and then had the schedule_search capability revoked it could cause problems, or if the owner of the scheduled search was deleted and the search was orphaned). So you can check which of your searches are disabled and which were skipped, or alternatively check the scheduler log for searches actually run and compare that list with the list of defined searches.
Here's the answer: https://community.splunk.com/t5/Splunk-Search/how-to-use-a-field-as-timestamp-for-a-timechart/m-p/145037 Use strptime to parse your Opened_At field into a Unix timestamp, then assign that to _time.
I should add that the format of the Opened_At field is '2023-02-03 15:39:44'.
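Given that format, a minimal sketch of the conversion (the base search and the daily span are placeholders; only the Opened_At field and its format come from the posts above):

```spl
... your base search ...
| eval _time=strptime(Opened_At, "%Y-%m-%d %H:%M:%S")
| timechart span=1d count
```

strptime returns the epoch value, and assigning it to _time lets timechart bucket on the field instead of the event's indexed time.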
I have 2 events:

Event 1: Timestamp A  UserID:ABC  startevent
Event 2: Timestamp B  ID:ABC  endevent

I want to find the time difference between the start event and the end event. In the first event the field is named "UserID" and in the second event it is named "ID". These two fields hold the value of the user for which the start and subsequent end event is generated. How can I get the time difference here? To use transaction I need a shared field. When I use transaction like below, I get very few results:

| transaction userId startswith=(event="startevent") endswith=("endevent") maxevents=2
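One common alternative to transaction, as a sketch (assuming only the field names and values described above; the base search is a placeholder), is to normalize the two field names with coalesce and compute the difference with stats:

```spl
... ("startevent" OR "endevent")
| eval user=coalesce(UserID, ID)
| stats earliest(_time) AS start_time latest(_time) AS end_time by user
| eval duration=end_time-start_time
```

This gives one row per user with the elapsed seconds, and it avoids transaction's requirement for a single shared field name.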
I'm running into a limitation with Splunk custom apps: I want the admin to be able to set an API key for my 3rd-party app, and I want everyone to have access to this secret so they can actually run the custom commands that call the 3rd-party API, without the admin having to give out list_storage_passwords to everyone if possible. Is there any workaround, or are we still limited to the workarounds described below, e.g. having to give list_storage_passwords to everyone and then retroactively apply fine-grained access controls to every secret? How are devs accomplishing this? https://community.splunk.com/t5/Splunk-Dev/What-are-secret-storage-permissions-requirements/m-p/641409 --- This idea is 3.5 years old at this point. https://ideas.splunk.com/ideas/EID-I-368
Hi. Can you show what props you are currently using?
Oct 30 06:55:08 Server1 request-default Cert x.x.x.x - John bank_user Viewer_PIP_PIP_env vu01 Appl Test [30/Oct/2023:06:54:51.849 -0400] "GET /web/appWeb/external/index.do HTTP/1.1" 200 431 7 9 8080937 x.x.x.x /junctions 25750 - "OU=00000000+CN=John bank_user Viewer_PIP_PIP_env vu01 Appl Test,OU=st,O=Bank,C=us" bfe9a8e8-7712-11ee-ab2e-0050568906b9 "x509: TLSV12: 30" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36"

I have the above in the log. I have a field extraction (regular expression) to extract the user, in this case "John bank_user Viewer_PIP_PIP_env vu01 Appl Test". The alert did find this user but reported the user name as "john". Some other users who have a space in their name show up in the alert fine. How do I fix the extraction so the entire user name shows up in the alert?
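As an illustration only, a sketch of an extraction that assumes the username always sits between the " - " separator and the bracketed timestamp (this assumption comes from the single sample event above, not from the poster's actual configuration):

```spl
| rex " - (?<user>[^\[]+?)\s+\["
```

The lazy `[^\[]+?` capture runs up to, but not including, the whitespace before `[30/Oct/...]`, so multi-word names are kept whole.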
I am trying to use the following search to make a timechart of security incident sources, but Splunk is reporting zeros for all the counts, which I can confirm is NOT accurate at all. I think the issue is that I need to use a different time field for the timeline. Can someone assist me in making this chart work?

index=sir sourcetype=sir
| rex field=dv_affected_user "(?<user>[[:alnum:]]{5})\)"
| rex mode=sed field=opened_at "s/\.0+$//"
| rex mode=sed field=closed_at "s/\.0+$//"
| rename opened_at AS Opened_At, closed_at AS "Closed At", number AS "SIR Number", dv_assignment_group AS "Assignment Group", dv_state AS State, short_description AS "Short Description", close_notes AS "Closed Notes", dv_u_organizational_action AS "Org Action", u_concern AS Concern, dv_u_activity_type AS "Activity Type", dv_assigned_to AS "Assigned To"
| eval _time=Opened_At
| eval Source=coalesce(dv_u_specific_source, dv_u_security_source)
| fillnull value=NULL Source
| table Source, _time, "SIR Number"
| timechart span=1mon count usenull=f by Source
Is getting a direct download deprecated on Splunk 9.x? The app_exporter has many issues, but the biggest one is that it no longer works in Splunk Cloud as far as I can tell.
In Dashboard Studio I have a panel with a list of the top 10 issue types. I want to set 3 tokens with numbers 1, 2 and 3 of this top 10, to use these in a different panel's search to show the (full) events.

index=.....      ("WARNING -" OR "ERROR -")
| rex field=_raw "(?<issuetype>\w+\s-\s\w+)\:"
| stats count by application, issuetype
| sort by -count
| head 10

The result depends and might be:

count  issuetype
345    ERROR - Connectbus
235    Warning - Queries
76     Error - Export
45     Error - Client
32     Warning - Queue
…

Now I want to show the events of the top 3 issue types of this list in the following panels, by storing the first 3 issue types in $tokenfirst$, $tokensecond$ and $tokenthird$ and searching for those values. I selected "use search result as token", but how do I select only the first 3 results into 3 different tokens (and of course after the top 10 is calculated)?
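A sketch of one way to get the top three values into separate fields on a single result row, which a "set token from search result" configuration could then read (the field names tokenfirst/tokensecond/tokenthird are assumptions, chosen to match the token names above):

```spl
index=... ("WARNING -" OR "ERROR -")
| rex field=_raw "(?<issuetype>\w+\s-\s\w+)\:"
| stats count by issuetype
| sort - count
| head 3
| stats list(issuetype) AS types
| eval tokenfirst=mvindex(types,0), tokensecond=mvindex(types,1), tokenthird=mvindex(types,2)
| table tokenfirst tokensecond tokenthird
```

The second stats collapses the three rows into one multivalue field, and mvindex picks positions 0 to 2 out of it, so all three tokens can be set from one search result.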
I am migrating Splunk to Auth0 for SAML, which authenticates against Active Directory. Currently Splunk just uses Active Directory. I have the realName field set to the "nickname" attribute in the SAML response, which is the username, but when I run searches or make dashboards/alerts they are assigned to the user_id attribute, which is gibberish. I'm wondering how we can have the knowledge objects assigned to the friendly username instead of the user_id, because I'm curious whether a user will still be able to see their historical knowledge objects, since the owner value is now different. Unless it is somehow mapped to it.
Change this line so that it takes into account what the previous day is:

[ search earliest=-1d@d latest=-1d ]
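One possible reading of "takes into account what the previous day is" is skipping weekends. A sketch under that assumption (the field names dow, days_back, search_earliest and search_latest are all illustrative):

```spl
| eval dow=strftime(now(), "%w")
| eval days_back=case(dow="1", 3, dow="0", 2, true(), 1)
| eval search_earliest=relative_time(now(), "-".days_back."d@d")
| eval search_latest=relative_time(now(), "-".(days_back-1)."d@d")
```

On a Monday (`%w` = 1) this reaches back to Friday; on a Sunday, to Friday as well; otherwise it uses the calendar day before.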
Hi, I am looking for a way to find Splunk scheduled searches that have not been used for several weeks by users or apps (for example, a user left and their search is no longer checked). I tried focusing on the audit logs for non-ad-hoc searches and on the saved searches REST API, but I wasn't able to find a meaningful result.
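As a starting point, a sketch that lists scheduled searches with no scheduler activity in the last 30 days (the 30-day window is an assumption; this finds searches that have not *run*, while detecting whether anyone still looks at the results would likely need index=_audit as well):

```spl
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 disabled=0
| table title eai:acl.app eai:acl.owner
| join type=left title
    [ search index=_internal sourcetype=scheduler earliest=-30d
      | stats latest(_time) AS last_run by savedsearch_name
      | rename savedsearch_name AS title ]
| where isnull(last_run)
```

Searches that appear in the REST output but never in the scheduler log over the window are candidates for cleanup.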
Honestly, you just need two sources of truth. One for hardware, which I typically see in customer environments as Tenable, CrowdStrike, or some sort of application that reliably scans most of the network devices. One for identities, for which I have used Okta, LDAP, or some sort of identity service. Next I would create some SPL that looks like the following (this example uses Okta):

index=prod_okta user=*@*
| eval identity = user
| rename profile.* AS *
| eval prefix = honorificPrefix
| eval nick = nickName
| eval first = firstName
| eval last = lastName
| eval suffix = honorificSuffix
| eval email = ciscoUsername
| eval phone = primaryPhone
| eval managedBy = manager
| eval priority = debugContext.debugData.risk
| eval bunit = coalesce(label,department)
| eval category = actor.type
| eval watchlist = admin_interest
| eval startDate = thousandeyesStartDate
| eval endDate = thousandeyesTermDate
| eval work_city = city
| eval work_country = country
| eval work_lat = latitude
| eval work_long = longitude
| eval device_name = client.device
| eval work_state = state
| eval postal_code = postal
| eval employee_num = employeeNumber
| eval employee_status = employmentStatus
| eval manager_id = managerId
| eval manager_email = mgr_email
| eval postal_address = postalAddress
| eval sam_account_name = sAMAccountName
| eval second_email = secondEmail
| eval mobile_phone = mobilePhone
| eval title = title
| stats first(prefix) AS prefix first(nick) AS nick first(first) AS first values(last) AS last first(suffix) AS suffix first(email) AS email first(phone) AS phone first(managedBy) AS managedBy first(priority) AS priority first(bunit) AS bunit first(category) AS category first(watchlist) AS watchlist first(startDate) AS startDate first(endDate) AS endDate first(work_city) AS work_city first(work_country) AS work_country first(work_lat) AS work_lat first(work_long) AS work_long first(device_name) AS device_name first(work_state) AS work_state first(postal_code) AS postal_code first(employee_num) AS employee_num first(employee_status) AS employee_status first(manager_id) AS manager_id first(manager_email) AS manager_email first(postal_address) AS postal_address first(sam_account_name) AS sam_account_name first(second_email) AS second_email first(mobile_phone) AS mobile_phone first(title) AS title by identity
| table identity prefix nick first last suffix email phone managedBy priority bunit category watchlist startDate endDate work_city work_country work_lat work_long epkey device_name work_state postal_code employee_num employee_status manager_id manager_email postal_address sam_account_name second_email mobile_phone title
| append [| inputlookup okta_identies.csv]
| stats first(prefix) AS prefix first(nick) AS nick first(first) AS first values(last) AS last first(suffix) AS suffix first(email) AS email first(phone) AS phone first(managedBy) AS managedBy first(priority) AS priority first(bunit) AS bunit first(category) AS category first(watchlist) AS watchlist first(startDate) AS startDate first(endDate) AS endDate first(work_city) AS work_city first(work_country) AS work_country first(work_lat) AS work_lat first(work_long) AS work_long first(device_name) AS device_name first(work_state) AS work_state first(postal_code) AS postal_code first(employee_num) AS employee_num first(employee_status) AS employee_status first(manager_id) AS manager_id first(manager_email) AS manager_email first(postal_address) AS postal_address first(sam_account_name) AS sam_account_name first(second_email) AS second_email first(mobile_phone) AS mobile_phone first(title) AS title by identity
| outputlookup okta_identies.csv

Hope this helps.
Hello, I had to rename a bunch of rules yesterday, so I cloned them from the Searches, Reports, and Alerts dashboard. They all have global permissions (all apps). For some reason I can't find any of the rules under the Content Management section. Is there a reason why the cloned rules aren't showing there? Thanks!
Did you ever find a resolution for this? I'm having the same issue. UPDATE: For me, the add-ons were named XXX_inframon. When I renamed them to remove _inframon, the validation succeeded. Still trying to determine where the mismatch is, as this doesn't seem to be a long-term solution.
Thanks. Yeah, I read the same, which is why I settled on trying out the MonitorNoHandle option. Based on the reading material, it seemed like this would work.
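For reference, a MonitorNoHandle stanza in inputs.conf on a Windows forwarder looks roughly like this (the path, sourcetype and index are placeholders, not values from this thread):

```ini
[MonitorNoHandle://C:\path\to\locked\file.log]
sourcetype = my_sourcetype
index = main
disabled = 0
```

MonitorNoHandle is Windows-only and watches a single file per stanza, reading writes as they happen rather than opening a handle on the file.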
I'm looking to close out (or delete) all notable events that were created prior to a specific date and time. Given the way they're trying to run reports, it is easier to delete or close them than it would be to filter them out of the reports. Is there a way to use an eval query (or similar), or would it be best to use the API to close them? Or am I SOL, and do I need to filter at the dashboard/report query level?
Thanks all, I came up with the same potential fix using MonitorNoHandle. Unfortunately, although I saw no errors in splunkd.log, it did not seem to read lines sent to the file. I followed the examples to set up the stanza, and MonitorNoHandle.exe was started on the server. I will dig deeper to see what might be going on and post what I find here, since there is very little about MonitorNoHandle out there today.