All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunkers, my requirement is below. I have a lookup where 7 hosts are defined. When my search runs, for both tstats and stats, I only get counts for the 5 hosts whose count is greater than 0. Can someone help with how we can get a count of 0 for the hosts we are passing in from the lookup? Query:

| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd" [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host] groupby host
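A common way to get a 0 for the hosts that have no events (a sketch, assuming the Primary column in Firewall_list.csv holds the host names) is to append the full lookup list after the tstats and fill the gaps:

```
| tstats count max(_time) AS latest_event_time
    where index=firewall sourcetype="cisco:ftd"
    [| inputlookup Firewall_list.csv | rename Primary AS host | table host]
    groupby host
| append [| inputlookup Firewall_list.csv | rename Primary AS host | table host]
| stats sum(count) AS count max(latest_event_time) AS latest_event_time by host
| fillnull value=0 count
```

The appended lookup rows have no count field, so after the second stats any host that only came from the lookup ends up with a null count, which fillnull turns into 0.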
I am reaching out to request assistance with building a Splunk query that can extract names from the accessible_by arrays in the log data and present them in a single row. I would like a Splunk query that extracts these names and displays them in a single row with the field accessible_by.name. The expected output is accessible_by.name Pradeep Tallam Lakshmi Prasanna Lakamsani Here is an example of the log message format: message: {BOX CM FPN EVENT requestBody {"channel_id":"","event_type":"COLLAB_INVITE_COLLABORATOR","event_id":"11889593183610133","created_at":"2024-08-01T06:16:28-07:00","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"source":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings","created_at":"2024-04-16T07:25:18-07:00","modified_at":"2024-05-15T13:53:17-07:00","description":"","size":20180208521,"path_collection":{"total_count":0,"entries":[]},"created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"modified_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"trashed_at":null,"purged_at":null,"content_created_at":"2024-04-16T07:25:18-07:00","content_modified_at":"2024-05-15T13:53:17-07:00","owned_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"shared_link":null,"folder_upload_email":null,"parent":{"type":"folder","id":"0","sequence_id":null,"etag":null,"name":"All Files"},"item_status":"active","item_collection":{"total_count":5,"entries":[{"type":"file","id":"1516571855910","file_version":{"type":"file_version","id":"1665332707110","sha1":"840bac4b4aeba16ca41c7b7a0b7288a9fd6aecac"},"sequence_id":"0","etag":"0","sha1":"840bac4b4aeba16ca41c7b7a0b7288a9fd6aecac","name":"Chorus deployment part 
1.mov"},{"type":"file","id":"1516571257880","file_version":{"type":"file_version","id":"1665332512280","sha1":"0f4147580b7b7b7cde0036124fdda1738f9c1b92"},"sequence_id":"0","etag":"0","sha1":"0f4147580b7b7b7cde0036124fdda1738f9c1b92","name":"Chorus Deployment part 2.mov"},{"type":"file","id":"1516586947142","file_version":{"type":"file_version","id":"1665349857542","sha1":"b953ffaa99b32b66e0a5f607205a5116c0aadc1e"},"sequence_id":"0","etag":"0","sha1":"b953ffaa99b32b66e0a5f607205a5116c0aadc1e","name":"Deployment process of Oomnitza Certs for Retail and AC CR.mov"},{"type":"file","id":"1503788446950","file_version":{"type":"file_version","id":"1651069594950","sha1":"63b30177d283bf8061b7ad4d9712882829950f77"},"sequence_id":"1","etag":"1","sha1":"63b30177d283bf8061b7ad4d9712882829950f77","name":"Roomlookup job deployment.mov"},{"type":"file","id":"1531717232578","file_version":{"type":"file_version","id":"1682330389378","sha1":"f4f0fc6473ce24ef66635ca8d8ff410b734c95a1"},"sequence_id":"0","etag":"0","sha1":"f4f0fc6473ce24ef66635ca8d8ff410b734c95a1","name":"Slack Rooms App weekly deployment.mov"}],"offset":0,"limit":100,"order":[{"by":"type","direction":"ASC"},{"by":"name","direction":"ASC"}]},"collaborators":{"next_marker":"","previous_marker":"","entries":[{"type":"collaboration","id":"55749534901","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"created_at":"2024-08-01T06:16:27-07:00","modified_at":"2024-08-01T06:16:27-07:00","expires_at":null,"status":"pending","accessible_by":null,"invite_email":"ccs_platform_engineering@group.apple.com","role":"editor","acknowledged_at":null,"item":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings"}},{"type":"collaboration","id":"53612889578","created_by":{"type":"user","id":"21066233551","name":"Sk 
Tajuddin","login":"stajuddin@apple.com"},"created_at":"2024-04-28T23:34:46-07:00","modified_at":"2024-04-28T23:34:46-07:00","expires_at":null,"status":"accepted","accessible_by":{"type":"user","id":"9049454819","name":"Pradeep Tallam","login":"pradeep_tallam@apple.com","is_active":true},"invite_email":null,"role":"editor","acknowledged_at":"2024-04-28T23:34:46-07:00","item":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings"}},{"type":"collaboration","id":"53611885725","created_by":{"type":"user","id":"21066233551","name":"Sk Tajuddin","login":"stajuddin@apple.com"},"created_at":"2024-04-28T23:40:15-07:00","modified_at":"2024-04-28T23:40:15-07:00","expires_at":null,"status":"accepted","accessible_by":{"type":"user","id":"20437795550","name":"Lakshmi Prasanna Lakamsani","login":"llakamsani@apple.com","is_active":true},"invite_email":null,"role":"editor","acknowledged_at":"2024-04-28T23:40:15-07:00","item":{"type":"folder","id":"259179951858","sequence_id":"0","etag":"0","name":"My recordings"}}]}}}} Could you please assist in creating this query?
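One way to pull those names into a single row (a sketch, assuming the JSON shown above sits in a field named message with the non-JSON prefix "BOX CM FPN EVENT requestBody" in front of it, and that the collaborator entries live under source.collaborators.entries{}) is to rex out the JSON payload, then use spath and mvjoin:

```
| rex field=message "requestBody (?<payload>\{.*\})"
| spath input=payload path=source.collaborators.entries{}.accessible_by.name output=names
| eval "accessible_by.name"=mvjoin(names, " ")
| table "accessible_by.name"
```

spath with a {} array path returns all matching array elements as a multivalue field, and mvjoin flattens them into one string per event.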
Hello, is /debug/refresh enough to reload role capabilities from authorize.conf? Thanks.   Splunk Enterprise v9
I currently run a monthly search for searches/jobs that take a long time. I then look up each job and check whether there is an alert associated with it. I will then work with teams/individuals to see if the job can instead be measured as a metric that triggers in SignalFx. However, there are some jobs I cannot find, and I think I am missing some part of the logic of how things work. I run this search:

index=_internal sourcetype=scheduler user!=nobody
| table _time user savedsearch_name status scheduled_time run_time result_count
| sort run_time desc

Results look like this. I try to look up the savedsearch_name (like Weekly Sort Options) on this page - splunkcloud.com/en-GB/app/search/job_manager - but nothing comes up. Some savedsearch_name values show up with results, others don't, and I'm not sure why.
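One way to track down where such a saved search actually lives (a sketch; the title value here is just the example name from above) is the REST endpoint, which lists saved searches across apps and owners:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="Weekly Sort Options"
| table title eai:acl.owner eai:acl.app eai:acl.sharing
```

The Job Manager only shows jobs your role has permission to see, so privately shared searches owned by other users often won't appear there even though the scheduler logs them in _internal.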
Has anybody here ever cracked the nut on how to send Splunk messages triggered by an alert to a Microsoft Teams "chat" (not a Teams channel)?
Hi all, We recently ran into issues with our heavy forwarder being unable to connect to certain IPs in our Splunk Cloud environment.

07-22-2024 19:20:14.384 +0000 WARN AutoLoadBalancedConnectionStrategy [2042120 TcpOutEloop] - Cooked connection to ip=[[IP1]]:9997 timed out
07-22-2024 19:19:54.508 +0000 WARN AutoLoadBalancedConnectionStrategy [2042120 TcpOutEloop] - Cooked connection to ip=[[IP2]]:9997 timed out
07-22-2024 19:19:34.584 +0000 WARN AutoLoadBalancedConnectionStrategy [2042120 TcpOutEloop] - Cooked connection to ip=[[IP3]]:9997 timed out

This appears to be a pretty common error based on what I've seen in other community posts, and typically they are related to a firewall issue. I wanted to document that in our case, the issue was related to the IPs not being assigned to indexers on the Splunk Cloud instance. According to Splunk support, "Usually, these DNS inputs (inputs1.companyname.splunkcloud.com, inputs2.........., inputs15.companyname.splunkcloud.com) resolve to the aforementioned IP tables defined, but these IPs should be then linked to the indexers which is not what we currently have present." I assume this is an uncommon root cause, and wanted to put it out there as another troubleshooting option when investigating this issue. I hope it helps.
Hi team, basic questions: how do I check the following on Splunk Cloud: what services I am using, what applications I have, and what I am ingesting?
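For the ingestion part, the Cloud Monitoring Console app is the usual starting point; for a quick ad-hoc view, one common search is the following (a sketch; on Splunk Cloud, whether you can see _internal and license_usage.log depends on your role):

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes by idx, st
| eval GB=round(bytes/1073741824, 2)
| fields idx st GB
| sort - GB
```

This breaks down licensed ingest volume by index (idx) and sourcetype (st), which answers "what am I ingesting". Installed apps are listed under Apps > Manage Apps.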
I have header panels on a dashboard. Say the dropdown has a token called tok_panel. Is there a way to make the panels depend on specific values of tok_panel? I.e., if I select "All" in the dropdown, only the "All" panel should be visible instead of *. In my case I'm using a static option for "All"; the other options come from |stats c by fieldname. @gcusello  @ITWhisperer  @PickleRick @richgalloway 
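Assuming a classic Simple XML dashboard (a sketch; the token and panel names here are made up), the usual pattern is a <change> handler on the dropdown that sets one visibility token per condition, plus depends on each panel:

```
<input type="dropdown" token="tok_panel">
  <label>Panel</label>
  <choice value="all">All</choice>
  <default>all</default>
  <change>
    <condition value="all">
      <set token="show_all">true</set>
      <unset token="show_detail"></unset>
    </condition>
    <condition>
      <set token="show_detail">true</set>
      <unset token="show_all"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_all$">
  <title>All</title>
</panel>
<panel depends="$show_detail$">
  <title>Details for $tok_panel$</title>
</panel>
```

A panel with depends is only rendered while its token is set, so unsetting the other token in each condition makes the panels mutually exclusive.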
Hi, I am new to Splunk and would like to build a dashboard to find all hosts in the environment. This should query all logs to pick up WSL environments, devices ingesting from my security tools, and overall just anything with a hostname, and classify it as domain-joined, server, or workstation. I am using this to then see the devices that have the forwarder installed, and would then correlate to see what devices require the Splunk forwarder.

index="_internal" source="*metrics.log*" group=tcpin_connections
| dedup hostname
| table date_hour, date_minute, date_mday, date_month, date_year, hostname, sourceIp, fwdType, guid, version, build, os, arch
| stats count
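For the "everything with a hostname" side (a sketch, assuming your role can search across your event indexes), tstats is a cheap way to enumerate every host that has sent anything, which you can then compare against the forwarder list from metrics.log:

```
| tstats count latest(_time) AS last_seen where index=* by host, index, sourcetype
| eval last_seen=strftime(last_seen, "%F %T")
| sort - count
```

tstats reads only the indexed host/index/sourcetype metadata rather than raw events, so this stays fast even across long time ranges.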
I know this question was asked about 2 years ago, but I didn't see any responses, so I am going to ask it again. Is there a way to add a tooltip to a table in Dashboard Studio? I want the tooltip to be dynamic, in that the wording comes from a lookup table, based on
Is there a way to export the ticket information from Mission Control, with the analyst notes and resolution? If it's not possible directly from MC, is there a way to do that via a Splunk search? Thanks.
In my case there is an index with a field OP which has a duration TT. Of course there are a lot of records with different OPs and different TTs.

| stats perc25(TT) as Q1, median(TT) as Q2MEDIAN, perc75(TT) as Q3, perc98(TT) as P98 by OP

Here is the way I compute the quartiles and 98th percentile of my set. The result is four values between 2 sec (Q1) and 40 sec (P98) for every OP. Last time @ITWhisperer mentioned the BIN command. I like it! I wondered about creating 10 bins instead (a kind of every-10th percentile). I did something like

| bin TIMETAKEN bins=10
| stats count(TIMETAKEN) by TIMETAKEN

and expected to see 10 bins, but the result was:

TIMETAKEN   count(TIMETAKEN)
0-10        6393
10-20       389
20-30       15
40-50       2

so not 10 bins but only 4. What am I doing wrong? And how do I create 10 bins for each OP? Something like

| bin TIMETAKEN bins=10
| stats count(TIMETAKEN) by OP   ???
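On the bin question: bins=10 sets a maximum number of bins, Splunk rounds to a "nice" span, and stats simply doesn't emit buckets that contain no events, which is why only 4 rows appear. To force ten equal-width buckets per OP, one option (a sketch; it assumes each OP's max TIMETAKEN is greater than its min, otherwise width is 0) is to compute the bucket yourself:

```
| eventstats min(TIMETAKEN) AS minTT, max(TIMETAKEN) AS maxTT by OP
| eval width=(maxTT - minTT) / 10
| eval bucket=min(floor((TIMETAKEN - minTT) / width), 9)
| stats count by OP, bucket
```

The min(..., 9) clamp keeps the single event at the exact maximum from spilling into an 11th bucket. Empty buckets still won't show; if you need all ten rows per OP you would have to append the bucket list and fill with zeros, as in the zero-count pattern further up this page.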
Hello All, I have a dashboard for vulnerability tracking, but I would like to add some custom changes. For example:

Vuln_number | dv_risk_rating | dv_assignment_group | short_description
VIT9662368 | 2 - High | XYZ | R7-msft-cve-2024-38077 detected on
VIT9662366 | 2 - High | XYZ | R7-msft-cve-2024-38074 detected on
VIT9662367 | 2 - High | XYZ | R7-msft-cve-2024-38076 detected on ics028159223
VIT9662265 | 2 - High | XYZ | R7-msft-cve-2024-38077 detected on
VIT9662260 | 2 - High | XYZ | R7-msft-cve-2024-38074 detected on

I need a table with comments in the status column (the comments are static: either there is no action, or I have to fix it in the next release, or an exception - so only 3). Can I offer that as a dropdown, then select a vulnerability and assign its status from the dropdown?

Status_dropdown
No Action
Fix in next release
Exception Raised

dv_number | dv_risk_rating | dv_assignment_group | short_description | status
VIT9662368 | 2 - High | XYZ | R7-msft-cve-2024-38077 detected on | No action
VIT9662366 | 2 - High | XYZ | R7-msft-cve-2024-38074 detected on | Fix in next release
VIT9662367 | 2 - High | XYZ | R7-msft-cve-2024-38076 detected on | Exception Raised
VIT9662265 | 2 - High | XYZ | R7-msft-cve-2024-38077 detected on | Fix in next release
VIT9662260 | 2 - High | XYZ | R7-msft-cve-2024-38074 detected on | No action
Hello Everyone, with the below query

<my_search_index>
| spath uri
| search uri="/vehicle/orders/v1" OR uri="/vehicle/orders/v1*/validate" OR uri="/vehicle/orders/v1*/process" OR uri="/vehicle/orders/v1*/processInsurance"
| eval Operations=case(
    searchmatch("/vehicle/orders/v1*/processInsurance"),"processInsurance",
    searchmatch("/vehicle/orders/v1/*/validate"),"validateOrder",
    searchmatch("/vehicle/orders/v1/*/process"),"processOrder",
    searchmatch("/vehicle/orders/v1"),"createOrder")
| stats count as hits avg(request_time) as average perc90(request_time) as response90 by Operations
| eval average=round(average,2), response90=round(response90,2)

I am able to construct the table. Apart from the 4 URL patterns mentioned in the query, I need to include the following URL pattern for getOrder:

uri: /vehicle/orders/v1/dbd20er9-g7c3-4e71-z089-gc1ga8272179

from the raw Splunk log:

{ "request_timestamp ": "02/Jan/1984:09:05:04", "response_timestamp": "01/Jan/1984:09:05:04 +0000", "kong_request_id": "my_kong_req_id", "ek-correlation-id": "my_corr_id", "ek-request-id": "my_req_id", "ek-transaction-id": "", "req_id": "", "channel_name": "", "logType": "kong", "traceparent": "0traceparent", "request_method": "GET", "remote_addr": "1.2.3.4", "server_addr": "5.5.6.6", "scheme": "https", "host": "my.host.com", "status": 200, "request_method": "GET", "uri": "/vehicle/orders/v1/dbd20er9-g7c3-4e71-z089-gc1ga8272179", "server_protocol": "HTTP/1.1", "bytes_sent": 23663, "body_bytes_sent": 23547, "request_length": 1367, "http_referer": "-", "http_user_agent": "-", "request_time": "0.010", "upstream_response_time": "0.008", "upstream_addr": "1.3.5.7", "http_content_type": "application/json", "upstream_host": "my.host.com" }

Not sure how I should change my query to include the required URL pattern.
If I try this: /vehicle/orders/v1/* or /vehicle/orders/v1/*-*-*-*-*, it might include the counts of the below patterns as well:

/payment/orders/v1*/processInsurance
/payment/orders/v1/*/validate
/payment/orders/v1/*/process
/payment/orders/v1

Appreciate your help.
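One way to add getOrder without the wildcards bleeding into other endpoints (a sketch; the ID regex assumes order IDs always follow the 8-4-4-4-12 shape of the example above) is to switch the case() from searchmatch() to anchored match() regexes on the uri field:

```
| eval Operations=case(
    match(uri, "^/vehicle/orders/v1/[^/]+/processInsurance$"), "processInsurance",
    match(uri, "^/vehicle/orders/v1/[^/]+/validate$"), "validateOrder",
    match(uri, "^/vehicle/orders/v1/[^/]+/process$"), "processOrder",
    match(uri, "^/vehicle/orders/v1/[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12}$"), "getOrder",
    match(uri, "^/vehicle/orders/v1$"), "createOrder")
```

Because every pattern is anchored with ^/vehicle/ and $, the /payment/ endpoints and any longer sub-paths won't match, and the getOrder case only fires when the trailing segment looks like an order ID.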
Good morning everyone, I have been working as a developer for apps, dashboards, workflows, and use cases in Splunk for a couple of years. In this role I've successfully requested several developer licenses. I did so again last Tuesday, and again yesterday. Since then nothing has happened: no new license was sent, nor any kind of response. I'd be very pleased if you could make it (the dev license) happen. Thank you very much in advance. Kind regards, Ekkehard
Hi all, I am trying to fetch incident details from ServiceNow, but it's showing duplicate values.

index=acn_lendlease_certificate_tier3_idx tower=Entrust_Certificate
| join type=left source_host max=0
    [search index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*"
    | rex field=_raw "\"(?<INC>INC\d+),"
    | rex field=uid "(?i)^(?P<source_host>.+?)__"
    | table INC uid log_description source_host
    | dedup INC uid log_description source_host
    | rename INC as "Ticket_Number"]
| fillnull value="NA" Ticket_Number
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" by instance, Ticket_Number
| eval alert_value=case(alert_value==100,"Active", alert_value==300,"About to Expire", alert_value==500,"Expired")
| search Tower="*" alert_value="*" alert_value="About to Expire"
| sort "Days To Expire"
| dedup instance
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"
I'm building a dashboard in Splunk Dashboard Studio which consists of a lot of tiles containing images. On clicking any of the tiles, you get routed to a new dashboard in a new tab. I want to add a hover function to both the blue and orange tiles, so that as soon as the user hovers over any of the tiles, they can see a brief description of the dashboard (which gets opened in the new tab). Is there a way to do that?
I am trying to get the value of a field from a previously scheduled saved search into a new field using loadjob, but I am unable to get it to work. I am using something like:

index=my_pers_index sourcetype=ACCT
| eval userid = [| loadjob savedsearch="myuserid:my_app:my_saved_search" | return actor]

wherein:
myuserid - the owner ID
my_app - the application name
my_saved_search - name of the saved search that is present in savedsearches.conf and is scheduled
actor - a field name in my_saved_search
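A likely reason this fails (a sketch of a common workaround): a subsearch is expanded as literal text before eval runs, and `| return actor` expands to actor="value", which is not a valid eval expression. Returning the value pre-quoted makes the expansion a valid string literal:

```
index=my_pers_index sourcetype=ACCT
| eval userid=[| loadjob savedsearch="myuserid:my_app:my_saved_search"
    | head 1
    | eval q="\"" . actor . "\""
    | return $q]
```

`return $q` emits only the value of q (here, the actor value wrapped in double quotes), so the outer search becomes | eval userid="...", which parses cleanly. The head 1 assumes you want the first result row of the saved job.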
Hi, I need help as I'm new to Splunk; I was assigned this role by an administrator who has since resigned. I need to find out the expiration date of our Splunk Cloud license. I can't access the menu Settings > System > Permissions (I've checked the assigned user's capabilities, but there are no permissions for license_edit, license_read, license_tab, or license_view_warnings). Is it possible that there's a license manager somewhere on-premise, since we're using a Heavy Forwarder and a Deployment Server? Please advise on what I can do if I can't contact the previous administrator. What information should I gather beforehand to be ready to open a Support Ticket? I apologize if I've misunderstood anything.
Good day, I have installed Splunk ES v9.2.1 on a Linux server (CentOS 7.9). On the Splunk ES server, I have installed the Splunk Add-on for Unix and Linux with all scripts, including rlog.sh, enabled. I have also configured Splunk Forwarder v9.2.1 on a Linux client (CentOS 7.9). The Splunk ES server is receiving logs from the client as normal. But I still have a problem with the auditd logs coming from the client: the USER_CMD part of the auditd logs still appears in hex format instead of ASCII.

For example, part of the log reads:

USER_CMD=636174202F6574632F736861646F77

where I expect the decoded ASCII value:

USER_CMD=cat /etc/shadow

What am I doing wrong? And what can I do to view auditd logs in Splunk without, as an analyst, having to decode each log entry one at a time?