All Posts


Hi Woodcock, Do you know if this is still the case nowadays (2024)? thanks.    
Hi. I think you may be hitting the dispatch.ttl setting https://community.splunk.com/t5/Splunk-Search/What-exactly-does-the-ttl-mechanism-do/td-p/446152 Use advanced edit on your search and see what yours is set to.
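For reference, if you'd rather check the conf file than the UI, the setting lives in savedsearches.conf. A minimal sketch, assuming a saved search named "My Alert" (the stanza name and value here are only examples, not your actual config):
# savedsearches.conf
[My Alert]
# keep the search artifacts around for 24 hours (value is in seconds)
dispatch.ttl = 86400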
hi @inventsekar Thank you, you are right, some events do not have that particular "log_processed.message" field. When I add | spath input=_raw I see the events in table format, but I also see duplicate events. Can we avoid that?
index="sample" "log_processed.app"=mercury "log_processed.traceId"=dc57c0b7f0e8cfdee5002b62873f5de7 | spath input=_raw | table _time, log_processed.message
You have a common field, just not a common name. That's easy to fix using the coalesce function.
index=foo (UserID=* OR ID=*)
| eval commonID=coalesce(UserID, ID)
| stats min(_time) as startTime, max(_time) as endTime, values(*) as * by commonID
| eval diff=endTime - startTime
Hi, I stumbled across this while searching the same error, and thought I'd provide an answer in case someone else comes along from the first hit in their favorite search engine. Have you tried doing exactly what it says to do? Specifically, here's the process on my own machine. Note I'm starting out in /opt/splunk/etc/auth.
From there, let's find the path of the sslRootCAPath file specified in server.conf:
splunk@curie:/opt/splunk/etc/auth$ grep sslRootCAPath ../system/local/server.conf
sslRootCAPath = /opt/splunk/etc/auth/mycerts/chain.pem
Now that I know the active cert, I can make a backup copy of it just in case:
splunk@curie:/opt/splunk/etc/auth$ cp mycerts/chain.pem mycerts/chain.pem.2024-01-24
Then append appsCA.pem to that file and restart Splunk:
splunk@curie:/opt/splunk/etc/auth$ cat appsCA.pem >> mycerts/chain.pem
splunk@curie:/opt/splunk/etc/auth$ splunk restart
Worked like a charm.
I'm currently using the token $results_link$ to get a direct link to alerts when they get triggered. I've also set the "Expires" field to 72 hrs. However, if the alerts get triggered over the weekend, the results are always expired when checking them after 48 hours. Is it possible to have the alert results not expire after 48 hours?
If a scheduled search is not being run, it's typically due to one of two reasons: 1) it's disabled, so it's not being scheduled; 2) it's skipped due to SH(C) overload. If I remember correctly, there can be some other, rarer reasons, typically connected with role capabilities (I think if a user created a scheduled search and then had the schedule_search capability revoked it could cause problems, or if the owner of the scheduled search was deleted and the search was orphaned). So you can check which of your searches are disabled and which were skipped, or alternatively check the scheduler log for searches actually run and compare that list with the list of defined searches.
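As a rough sketch of that last step (the field names come from Splunk's scheduler log; adjust the time range to whatever window you care about):
index=_internal sourcetype=scheduler status=*
| stats count by app, savedsearch_name, status
Anything defined as a scheduled search but missing from this list hasn't been touched by the scheduler in that window.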
Here's the answer: https://community.splunk.com/t5/Splunk-Search/how-to-use-a-field-as-timestamp-for-a-timechart/m-p/145037 Use strptime to format your field Opened_At and create a Unix timestamp, then assign that to _time.
 I should add that the format of the Opened_At field is '2023-02-03 15:39:44'
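Given that format, a minimal sketch (assuming Opened_At keeps exactly that format and the rest of your base search stays as it is) would be:
... | eval _time=strptime(Opened_At, "%Y-%m-%d %H:%M:%S") | timechart count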
I have 2 events:
Event 1: Timestamp A  UserID:ABC  startevent
Event 2: Timestamp B  ID:ABC  endevent
I want to find the time difference between the start event and the end event. In the first event the field is named "UserID" and in the second event the field is named "ID". These two fields hold the value of the user for which the start and subsequent end event is generated. How can I get the time difference here? To use transaction I need a shared field. When I use transaction like below, I get very few results:
| transaction userId startswith=(event="startevent") endswith=("endevent") maxevents=2
I'm running into a limitation with Splunk custom apps where I want the admin to be able to set some API key for my 3rd-party app, and I want everyone to have access to this secret in order to actually run the custom commands that call the 3rd-party API, without the admin having to give out list_storage_passwords to everyone if possible. Is there any workaround to this, or are we still limited to the workarounds described below? E.g. having to give list_storage_passwords to everyone and then retroactively apply fine-grained access controls to every secret. How are devs accomplishing this? https://community.splunk.com/t5/Splunk-Dev/What-are-secret-storage-permissions-requirements/m-p/641409 --- This idea is 3.5 years old at this point. https://ideas.splunk.com/ideas/EID-I-368
Hi. Can you show what props you are currently using?
Oct 30 06:55:08 Server1 request-default Cert x.x.x.x - John bank_user Viewer_PIP_PIP_env vu01 Appl Test [30/Oct/2023:06:54:51.849 -0400] "GET /web/appWeb/external/index.do HTTP/1.1" 200 431 7 9 8080937 x.x.x.x /junctions 25750 - "OU=00000000+CN=John bank_user Viewer_PIP_PIP_env vu01 Appl Test,OU=st,O=Bank,C=us" bfe9a8e8-7712-11ee-ab2e-0050568906b9 "x509: TLSV12: 30" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36"
I have the above in the log. I have a field extraction (regular expression) to extract the user, in this case "John bank_user Viewer_PIP_PIP_env vu01 Appl Test". The alert did find this user but reported the user name as "john". There are some other users who have a space in the name and show up in the alert fine. How do I fix the extraction so the entire user name shows up in the alert?
I am trying to use the following search to make a timechart of security incident sources, but Splunk is reporting zeros for all the counts, which I can confirm is NOT accurate at all. I think the issue is that I need to use a different time field for the timeline. Can someone assist me in making this chart work?
index=sir sourcetype=sir | rex field=dv_affected_user "(?<user>[[:alnum:]]{5})\)" | rex mode=sed field=opened_at "s/\.0+$//" | rex mode=sed field=closed_at "s/\.0+$//" | rename opened_at AS Opened_At, closed_at AS "Closed At", number AS "SIR Number", dv_assignment_group AS "Assignment Group", dv_state AS State, short_description AS "Short Description", close_notes AS "Closed Notes", dv_u_organizational_action AS "Org Action", u_concern AS Concern, dv_u_activity_type AS "Activity Type", dv_assigned_to AS "Assigned To" | eval _time=Opened_At | eval Source=coalesce(dv_u_specific_source, dv_u_security_source) | fillnull value=NULL Source | table Source, _time, "SIR Number" | timechart span=1mon count usenull=f by Source
Is getting a direct download deprecated on Splunk 9.x? The app_exporter has many issues, but the biggest one is that it no longer works in Splunk Cloud as far as I can tell.
In Dashboard Studio I have a panel with a list of the top 10 issue types. I want to set 3 tokens with nr 1, 2 and 3 of this top 10, to use these in a different panel search to show the (full) events.
index=.....      ("WARNING -" OR "ERROR -") | rex field=_raw "(?<issuetype>\w+\s-\s\w+)\:" | stats count by application, issuetype | sort by -count | head 10
The result depends and might be:
count  issuetype
345    ERROR - Connectbus
235    Warning - Queries
76     Error - Export
45     Error - Client
32     Warning - Queue
…
Now I want to show the events of the top 3 issue types from this list in the following panels, by storing the first 3 issue types in $tokenfirst$, $tokensecond$ and $tokenthird$ and searching for those values. I selected "use search result as token", but how do I select only the first 3 results into 3 different tokens (and of course only after the top 10 is calculated)?
I am migrating to using Auth0 for SAML, which authenticates with Active Directory for Splunk. Currently Splunk just uses Active Directory. I have the realName field set to the "nickname" attribute in the SAML response, which is the username, but when I run searches or make dashboards/alerts they are assigned to the user_id attribute, which is gibberish. I'm wondering how we can make the knowledge objects assigned to the friendly username instead of the user_id, because I'm curious whether a user will still be able to see their historical knowledge objects since the owner value is now different. Unless it is somehow mapped to it.
Change this line so that it takes the previous day into account: [ search earliest=-1d@d latest=-1d]
Hi, I am looking for a way to find scheduled searches in Splunk that have not been used for several weeks by users or apps (for example, a user left and their search is no longer checked). I tried to focus on the audit logs for non-ad-hoc searches and on the saved searches REST API, but I wasn't able to find a meaningful result.
Honestly, you just need two sources of truth. One for hardware, which I typically see in customer environments as Tenable, CrowdStrike, or some sort of application that reliably scans most of the network devices. One for identities, for which I have used Okta, LDAP, or some sort of identity service. Next I would create some SPL that would look like the following (in this example I am using Okta):
index=prod_okta user=*@*
| eval identity = user
| rename profile.* AS *
| eval prefix = honorificPrefix
| eval nick = nickName
| eval first = firstName
| eval last = lastName
| eval suffix = honorificSuffix
| eval email = ciscoUsername
| eval phone = primaryPhone
| eval managedBy = manager
| eval priority = debugContext.debugData.risk
| eval bunit = coalesce(label,department)
| eval category = actor.type
| eval watchlist = admin_interest
| eval startDate = thousandeyesStartDate
| eval endDate = thousandeyesTermDate
| eval work_city = city
| eval work_country = country
| eval work_lat = latitude
| eval work_long = longitude
| eval device_name = client.device
| eval work_state = state
| eval postal_code = postal
| eval employee_num = employeeNumber
| eval employee_status = employmentStatus
| eval manager_id = managerId
| eval manager_email = mgr_email
| eval postal_address = postalAddress
| eval sam_account_name = sAMAccountName
| eval second_email = secondEmail
| eval mobile_phone = mobilePhone
| eval title = title
| stats first(prefix) AS prefix first(nick) AS nick first(first) AS first values(last) AS last first(suffix) AS suffix first(email) AS email first(phone) AS phone first(managedBy) AS managedBy first(priority) AS priority first(bunit) AS bunit first(category) AS category first(watchlist) AS watchlist first(startDate) AS startDate first(endDate) AS endDate first(work_city) AS work_city first(work_country) AS work_country first(work_lat) AS work_lat first(work_long) AS work_long first(device_name) AS device_name first(work_state) AS work_state first(postal_code) AS postal_code first(employee_num) AS employee_num first(employee_status) AS employee_status first(manager_id) AS manager_id first(manager_email) AS manager_email first(postal_address) AS postal_address first(sam_account_name) AS sam_account_name first(second_email) AS second_email first(mobile_phone) AS mobile_phone first(title) AS title by identity
| table identity prefix nick first last suffix email phone managedBy priority bunit category watchlist startDate endDate work_city work_country work_lat work_long epkey device_name work_state postal_code employee_num employee_status manager_id manager_email postal_address sam_account_name second_email mobile_phone title
| append [| inputlookup okta_identies.csv]
| stats first(prefix) AS prefix first(nick) AS nick first(first) AS first values(last) AS last first(suffix) AS suffix first(email) AS email first(phone) AS phone first(managedBy) AS managedBy first(priority) AS priority first(bunit) AS bunit first(category) AS category first(watchlist) AS watchlist first(startDate) AS startDate first(endDate) AS endDate first(work_city) AS work_city first(work_country) AS work_country first(work_lat) AS work_lat first(work_long) AS work_long first(device_name) AS device_name first(work_state) AS work_state first(postal_code) AS postal_code first(employee_num) AS employee_num first(employee_status) AS employee_status first(manager_id) AS manager_id first(manager_email) AS manager_email first(postal_address) AS postal_address first(sam_account_name) AS sam_account_name first(second_email) AS second_email first(mobile_phone) AS mobile_phone first(title) AS title by identity
| outputlookup okta_identies.csv
Hope this helps.