All Topics

You may have encountered a case where you have to upgrade the operating system on the servers where Splunk resides, in this case from Red Hat 7.x to 9.x. Are there any considerations that should be taken into account, given that two instances fulfill the indexer role and another cluster instance, which will not be upgraded, manages both? I was thinking of cloning each server, upgrading the clone in an isolated network, and then swapping them into the production environment one by one. Do you know if that works, or should I apply another strategy?
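A minimal sketch of one possible approach, assuming the two indexers are clustered peers managed by that third instance (the manager node): take one peer offline so searches are served by the other, upgrade or swap it, bring it back, wait for the cluster to report all data searchable, then repeat on the second peer. With only two peers and a replication factor of 2, expect the cluster to be in a degraded but searchable state while a peer is down.

```
# On the indexer peer being upgraded/swapped (Splunk CLI)
$SPLUNK_HOME/bin/splunk offline        # gracefully removes the peer from the cluster
# ...upgrade the OS or swap in the upgraded clone, then...
$SPLUNK_HOME/bin/splunk start          # the peer rejoins the cluster on startup
```

On the manager node, confirm the cluster shows everything searchable before touching the second peer.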
Hello Splunkers!! I have a Splunk dashboard where I am using a drilldown to dynamically load images from a media server. However, the images do not load initially. The strange part is that when I go to Edit > Source and then simply Save the dashboard again (without making any changes), the images start loading correctly. Why is this happening, and how can I permanently fix this issue without needing to manually edit and save the dashboard every time? Any insights or solutions would be greatly appreciated! I always get the error below; after performing the Edit > Source and Save action, the images load perfectly.
We have a requirement to exclude or remove a few fields from the events we receive in Splunk. We have already extracted the JSON data via configuration in props.conf; below is a sample event:

```
{
    adf: true
    all_request_headers: { [+] }
    all_response_headers: { [+] }
    avg_ingress_latency_be: 0
    avg_ingress_latency_fe: 0
    cacheable: true
    client_dest_port: 443
    client_insights:
    client_ip: XXXXXXXX
    client_rtt: 1
    client_src_port: 13353
    compression: NO_COMPRESSION_CAN_BE_COMPRESSED
    compression_percentage: 0
    conn_est_time_be: 6
    conn_est_time_fe: 0
    headers_received_from_server: { [+] }
    headers_sent_to_server: { [+] }
    host: wasphictst-wdc.hc.cloud.uk.sony
    http_version: 1.1
    jwt_log: { [+] }
    log_id: 121721
    max_ingress_latency_be: 0
    max_ingress_latency_fe: 0
    method: GET
    persistent_session_id: 3472328296699025517
    pool: pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6
    pool_name: p-wasphictst-wdc.hc.cloud.uk.sony-wdc-443
    report_timestamp: 2025-02-18T11:33:23.069736Z
    request_headers: 577
    request_id: euh-xfiN-7Ikq
    request_length: 148
    request_state: AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT
    response_code: 404
    response_content_type: text/html; charset=iso-8859-1
    response_headers: 13
    response_length: 6148
    response_time_first_byte: 61
    response_time_last_byte: 61
    rewritten_uri_query: test=%26%26%20whoami
    server_conn_src_ip: 128.160.77.237
    server_dest_port: 80
    server_ip: 128.160.73.123
    server_name: 128.160.73.123
    server_response_code: 404
    server_response_length: 373
    server_response_time_first_byte: 52
    server_response_time_last_byte: 61
    server_rtt: 9
    server_src_port: 56233
    servers_tried: 1
    service_engine: GB-DRN-AB-Tier2-se-vxeuz
    significant: 0
    significant_log: [ [+] ]
    sni_hostname: wasphictst-wdc.hc.cloud.uk.sony
    source_ip: 128.164.6.186
    ssl_cipher: TLS_AES_256_GCM_SHA384
    ssl_session_id: 935810081909dc8c018416502ece5d00
    ssl_version: TLSv1.3
    tenant_name: admin
    udf: false
    uri_path: /cmd
    uri_query: test=&& whoami
    user_agent: insomnia/2021.5.3
    vcpu_id: 0
    virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
    vs_ip: 128.160.71.101
    vs_name: v-wasphictst-wdc.hc.cloud.uk.sony-443
    waf_log: { [+] }
}
```

We need to remove a few fields, such as "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code", and many others, from new and existing events while onboarding. Where can I write the logic to exclude these fields? The app owners don't want these fields when viewing the data, and the source cannot be edited. We need to do this before onboarding.
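A minimal sketch of one index-time approach, assuming a Splunk version recent enough to support the json_delete eval function and a hypothetical sourcetype name avi:json. Note that index-time transforms only affect newly ingested data; events already indexed cannot be modified without reindexing.

```
# transforms.conf (hypothetical stanza name)
[drop_unwanted_json_fields]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code")

# props.conf
[avi:json]
TRANSFORMS-drop_fields = drop_unwanted_json_fields
```

If removing the data from disk is not strictly required, a lighter alternative is to hide the fields at search time, e.g. `| fields - avg_ingress_latency_* request_state server_response_code` in a base search or saved search the app owners use.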
Hello, I have installed Splunk Enterprise on a Windows server in order to collect NetFlow (port 2055) and syslog (port 514) data. In "Data Inputs" I added UDP 2055 with sourcetype="Netflow" and UDP 514 with sourcetype="Syslog". In "Forwarding and receiving", under "Forwarding defaults", I checked yes. But I can't see anything in Splunk. So I installed Wireshark, which does see the syslog and NetFlow traffic. I checked with PowerShell that the port is open and that splunkd is the process listening (netstat -ano | findstr :2055) and (tasklist | findstr XXXX). I've also installed several add-ons, but with no conclusive results. Has anyone had this problem before, or have any clues as to how to solve it? Thanks in advance.
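Two things worth checking in a setup like this, plus a minimal sketch of what the inputs could look like (the index and sourcetype names here are assumptions):

```
# inputs.conf on the Splunk Enterprise instance
[udp://514]
sourcetype = syslog
index = main
connection_host = ip

[udp://2055]
sourcetype = netflow
index = main
connection_host = ip
```

First, if "Forwarding defaults" is set to forward data, the instance sends events to the configured forwarding destination instead of indexing them locally, unless "store a local copy" (indexAndForward in outputs.conf) is enabled; that alone would explain seeing nothing in local searches. Second, NetFlow is a binary protocol, so a plain UDP input will at best index unreadable binary records; a NetFlow-aware collector (for example Splunk Stream, or an external tool that converts flows to text) is generally needed. To check whether anything is arriving at all, search across all indexes and all time, e.g. `index=* (sourcetype=syslog OR sourcetype=netflow) | stats count by index, sourcetype`.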
Hi, I have a KV store lookup which is populated automatically and contains arrays. How can I make it behave like a normal lookup that is searchable, or how can I turn it into a proper file?

Current CSV: (screenshot)

I want the above KV store as a searchable lookup, with proper segregation between rows.
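A minimal sketch of one way to flatten the KV store into a regular CSV lookup, assuming a lookup definition named my_kvstore_lookup and a multivalue (array) field named hosts, both hypothetical names:

```
| inputlookup my_kvstore_lookup
| mvexpand hosts
| outputlookup my_flat_lookup.csv
```

mvexpand writes one row per array element, so the resulting my_flat_lookup.csv has one value per row and can be used like any file-based lookup. If the array arrives as a single comma-separated string rather than a multivalue field, split it first with `| eval hosts=split(hosts, ",")`.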
Just passed my first cert. I'm located in the DC suburbs; is there any market for a cleared individual in the area?
Hello, thanks in advance for any help, and Karma will be on the way :). I'm trying to create a table with a "Sum" field that shows how many "Create" events exist that don't have a "Close" event. I'm doing this using an eval IF statement. The issue I'm having is that when using "Sum", I get no results for Sum when there are no matching events, but if I use "Count", I always get "1" returned. Here's the search I am using:

```
index="healthcheck" integrationName="Opsgenie Edge Connector - Splunk" "alert.message"="[ThousandEyes] Alert for TMS Core Healthcheck" action IN ("Create","Close")
| eval Create=IF(action=="Create",1,0)
| eval Close=IF(action=="Close",1,0)
| stats count(Create) as isCreate, count(Close) as isClose by alert.id
| eval comparison=IF(isCreate>isClose,"1","0")
| stats sum("comparison") as Sum count("comparison") as Count
| eval Application = "TMS_API"
| eval test = Sum
| eval test1 = Count
| eval test2 = Application
| eval "Monitor Details" = "Performs a Health Check "
| table test, test1, test2, "Monitor Details"
```

In the returned results, I get an empty "test" field and a "1" in the "test1" field. Thanks again for your help, and please let me know if more details are needed; this has been a huge headache for me. Thanks, Tom
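The likely culprit: count(X) counts events where X is non-null, and the eval statements assign 0 or 1 to every event, so Create and Close are never null and isCreate and isClose both equal the total event count per alert.id. A minimal sketch of a fix, summing the flags instead of counting them, making comparison numeric, and filling the empty Sum with 0 when nothing matches:

```
index="healthcheck" integrationName="Opsgenie Edge Connector - Splunk" "alert.message"="[ThousandEyes] Alert for TMS Core Healthcheck" action IN ("Create","Close")
| eval Create=if(action=="Create",1,0)
| eval Close=if(action=="Close",1,0)
| stats sum(Create) as isCreate, sum(Close) as isClose by alert.id
| eval comparison=if(isCreate>isClose,1,0)
| stats sum(comparison) as Sum, count(comparison) as Count
| fillnull value=0 Sum Count
```

sum() over an empty or all-null set returns nothing, which is why the "test" field came back empty; fillnull turns that into an explicit 0.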
Hi everyone. I'm really new to Splunk, so I'm confused by what seems to be a simple problem. I'm using "where row_num > 1" to remove the first row of my search, as I need to calculate lots of metrics over the whole dataset minus this specific first row. But I'm also supposed to show a value from this first row in a specific field. My query has the following structure:

```
my_search ...
| eval val1=...
| sort val1
| streamstats count as row_num
| where row_num > 1
| stats avg(...) as metric1, max(...) as metric2, count(...) as metric3
| fields metric1, metric2, metric3
```

But I also need to output the value 'x' that sits in field 'y' on row 1. How would I do this? Thanks in advance.
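A minimal sketch using eventstats to stash the first row's value before the where filter drops it ('y' here stands in for the real field name):

```
my_search ...
| eval val1=...
| sort val1
| streamstats count as row_num
| eventstats first(y) as first_row_y
| where row_num > 1
| stats avg(...) as metric1, max(...) as metric2, count(...) as metric3, first(first_row_y) as first_row_y
| fields metric1, metric2, metric3, first_row_y
```

eventstats copies the first event's y onto every row, so the value survives the `where row_num > 1` filter and can be carried through the final stats.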
Hi, can someone please let me know how to open different web URLs by clicking on different rows of a dashboard, using the drilldown option?

Example: the dashboard uses a lookup file, File.csv, with the two columns below (description and link):

DESC1 , LINK1
DESC2 , LINK2
DESC3 , LINK3

I've used the code below, but it always takes me to the same link, whether I click on DESC1, DESC2, or DESC3.

```
<row>
  <panel>
    <table>
      <search>
        <query>| inputlookup File.csv | fields *</query>
        <earliest>1722776400.000</earliest>
        <latest>1722865326.000</latest>
        <sampleRatio>1</sampleRatio>
        <done>
          <set token="schedule">$result.Schedule$</set>
        </done>
      </search>
      <drilldown>
        <link target="_blank">https://community.splunk.com/</link>
      </drilldown>
    </table>
  </panel>
</row>
```

Is it possible that if I click DESC1, it takes me to "https://community.splunk.com/t5/Dashboards-Visualizations"; if I click DESC2, it takes me to "https://www.google.com/"; and if I click DESC3, it takes me to "https://blog.avotrix.com/embed-splunk-dashboard-into-external-website/?force_isolation=true"?
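A minimal sketch, assuming the two columns in File.csv are headed DESC and LINK (adjust the token to the real column name). In Simple XML, $row.<fieldname>$ inside a table drilldown expands to that field's value in the clicked row:

```
<drilldown>
  <link target="_blank">$row.LINK$</link>
</drilldown>
```

With that in place, clicking the row whose LINK cell holds https://www.google.com/ opens that URL, and so on per row. If the URL comes out percent-encoded and breaks, use $row.LINK|n$ to suppress the default encoding.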
Hi
Can archived apps be installed onto Splunk Cloud? For example, here are two apps marked "This app is archived":
https://splunkbase.splunk.com/app/3120 (60K downloads)
https://splunkbase.splunk.com/app/3119 (30K downloads)
They are archived but not supported, and have been moved to classic Splunkbase:
https://classic.splunkbase.splunk.com/app/3119/
https://classic.splunkbase.splunk.com/app/3120/
I don't have a Cloud license, so I can't test this out. Does this mean I can't install them into Splunk Cloud?
Cheers
Robert
Hi everyone. I just started working with Splunk, and I have a query in which one of the steps is to count the number of instances where a certain field has a value > 10. But I have to count the number of instances with value > 10, > 15, > 30, > 60, > 120, and > 180. The way I'm doing it now is just by executing different counts, as follows:

```
<search> ...
| eval var1=...
| stats count(eval(var1 > 10)) as count10, count(eval(var1 > 15)) as count15, count(eval(var1 > 30)) as count30, count(eval(var1 > 60)) as count60, count(eval(var1 > 120)) as count120, count(eval(var1 > 180)) as count180 ...
```

But I'm aware this is probably not optimal, as, to my understanding, it will go through all the instances counting the ones > 10, then go through them all again counting the ones > 15, and so on. How would I execute this count using the fact that, e.g., to count the instances > 120, I only need to check the set of instances > 60, and so on? That is, how do I chain these counts and use them as "filters"? It's important to note that I don't want to use "where var1 > 10" multiple times, as I also need to compute other metrics over the whole dataset (e.g., avg(var1)), and, to my understanding, using just one `| stats count(eval(var > 10)) as count10` will "drop" all the other columns of my query. Anyway, how would I do this? Thank you in advance.
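For what it's worth, a single stats command evaluates all of its aggregations in one pass over the events, so the multi-count version is not re-scanning the data six times. That said, a minimal sketch of the chained/cumulative shape, at the cost of a second reporting step (whole-dataset metrics like avg(var1) would need to be computed before or alongside this):

```
<search> ...
| eval var1=...
| eval bucket=case(var1>180,180, var1>120,120, var1>60,60, var1>30,30, var1>15,15, var1>10,10, true(),0)
| stats count by bucket
| sort - bucket
| streamstats sum(count) as count_over_threshold
| where bucket > 0
```

Each event lands in exactly one bucket (its highest matching threshold), and the running sum from the largest bucket downward yields the cumulative counts: count_over_threshold on the bucket=60 row is the number of events with var1 > 60.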
Hello, I'd really appreciate any help on this one; I can't figure it out. I am using the following to show only the "Create" events that don't have a corresponding "Close" event:

```
| transaction "alert.id", alert.message startswith=Create endswith=Close keepevicted=true
| where closed_txn=0
```

This works, but the search runs over "All Time", and we only keep events for up to one year. I've run into the issue that once one of the "Create" events reaches that one year and is deleted, its "Close" event makes it appear in the search results. I'm not sure why a "Close" event without a corresponding "Create" event would be counted, or how I can prevent a lone "Create" or "Close" event from being returned once its counterpart has been deleted or falls outside the selected search time frame. Any ideas on this one? Thanks for any help; you will save me some sleepless nights. Tom
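A minimal sketch of a transaction-free alternative that sidesteps the aged-out-Create problem, since groups that contain only a Close (because the Create fell outside retention or the time range) are filtered out by the creates > 0 condition. The index and field names here are assumptions; adjust to your data:

```
index="healthcheck" action IN ("Create","Close")
| stats sum(eval(if(action=="Create",1,0))) as creates, sum(eval(if(action=="Close",1,0))) as closes by alert.id
| where creates > 0 AND closes == 0
```

Grouping with stats by alert.id happens in one pass and generally scales better than transaction as well.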
Good day everyone. I am trying to monitor whether any hosts in the environment have stopped sending logs. The challenge is to implement this through Content Management > Correlation Search, so it can be scheduled, e.g., every 2 hours. Any ideas?
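A minimal sketch that works as a scheduled correlation search, flagging any host whose most recent event is older than 2 hours. The index scope, lookback window, and threshold are assumptions to adjust:

```
| tstats latest(_time) as last_seen where index=* earliest=-30d by host
| where last_seen < relative_time(now(), "-2h")
| convert ctime(last_seen)
```

Because tstats reads indexed metadata rather than raw events, this stays cheap enough to schedule every couple of hours. One caveat: it can only report hosts that sent something within the lookback window; for hosts that must always report, comparing against an expected-hosts lookup is more reliable.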
```
Feb 3 11:10:15 server-server-server-server systemd[1]: Removed slice User Slice of UID 0.
Feb 3 04:14:23 server-server-server-server rsyslogd[679024]: imjournal: 16021 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 3 11:01:01 server-server-server-server CROND[3905399]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 3 11:10:55 server-server-server-server esfdaemon[3938104]: 0
Feb 3 10:24:36 server-server-server-server auditd[2689]: Audit daemon rotating log files
```

Is there a way to capture the whole line when the systemd, rsyslogd, or auditd keyword matches, using props.conf and transforms.conf? The configuration below matches up to the specific keyword; how do I capture the rest of the line after the keyword?

```
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ^\w{3}\s\s\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\s+(?:[+\-A-Z0-9]*\s+)?(systemd|rsyslogd|auditd)
DEST_KEY = queue
FORMAT = indexQueue
```
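Worth noting: a TRANSFORMS REGEX is matched against the event, not required to consume it, so it only has to match some substring; when it does, the entire event is routed to the FORMAT queue. The stanzas above should therefore already keep the whole line for systemd/rsyslogd/auditd events. A minimal sketch of the props.conf wiring (the sourcetype name is a placeholder), where order matters because the last matching transform wins:

```
# props.conf
[my_syslog]
TRANSFORMS-routing = setnull, setparsing
```

setnull sends everything to the nullQueue first, and setparsing then overrides the queue back to indexQueue for events containing the keywords, so only the matching lines are indexed, in full.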
I have a table with hundreds of thousands of rows which I am seeking to visualise in Splunk. The data is far too big for the Network Viz diagram to show without latency issues, so I am seeking to group it down, and will chunk it further with filters. Users are mainly interested in the services we have and how they connect with other services through our assets. Details like server names are less important, so those can be grouped. I am having issues with the default network viz diagram and its grouping behaviour.

Example data:

Parent | Child | Parent Class | Child Class
Service1 | Service2 | Service | Service
Server1 | Server2 | Server | Server
Server2 | Server3 | Server | Server
Service1 | Server3 | Service | Server
Service2 | Server2 | Service | Server
Service3 | Server4 | Service | Server
Service1 | Database1 | Service | Database
Service3 | Database1 | Service | Database
Service3 | Database2 | Service | Database
Server3 | Network1 | Server | Network
Server4 | Network1 | Server | Network

Desired look below (screenshot). Notice how there are multiple server groups rather than just one: we can clearly identify that Service1 and Service2 are linked through servers, while Service3 is separate and connects to a different group of servers.

And here is my attempt at using the group functionality of the network viz diagram (screenshot), using the asset class for colour groupings. Notice that the groups are just generic and cannot be named, and all servers have been grouped together, making it look like all three services are linked through servers. Expanding the diagram clearly shows they are not: Service3 is connected to a different server.

Is there a way to reach my desired grouping method with the default Splunk tools? Is there another add-on I could utilise?
I have a requirement to create a dashboard from the following JSON data:

```
all_request_headers: {
    Accept: */*
    Content-Length: 0
    Content-Type: text/plain
    Cookie: Cookie1=Salvin
    Host: wasphictst-wdc.hc.cloud.uk.sony
    User-Agent: insomnia/2021.5.3
}
all_response_headers: {
    Connection: keep-alive
    Content-Length: 196
    Content-Type: text/html; charset=iso-8859-1
    Date: Fri, 14 Feb 2025 15:51:13 GMT
    Server: Apache/2.4.37 (Red Hat Enterprise Linux)
    Strict-Transport-Security: max-age=31536000; includeSubDomains
}
waf_log: {
    allowlist_configured: false
    allowlist_processed: false
    application_rules_configured: false
    application_rules_processed: false
    latency_request_body_phase: 1544
    latency_request_header_phase: 351
    latency_response_body_phase: 15
    latency_response_header_phase: 50
    memory_allocated: 71496
    omitted_app_rule_stats: { [+] }
    omitted_signature_stats: { [+] }
    psm_configured: false
    psm_processed: false
    rules_configured: true
    rules_processed: true
    status: PASSED
}
```

Fields are getting auto-extracted, like waf_log.allowlist_configured, etc. They want a neat dashboard for request headers, response headers, WAF log details, etc. How do I create this dashboard? I am confused: if we create one panel per field, there will be far too many panels, right?
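Rather than one panel per field, a common pattern is one panel per header group, using wildcarded field names plus transpose to turn a wide event into a two-column name/value table. A minimal sketch for a "WAF log" panel; the base search is an assumption, so substitute your real index and sourcetype:

```
index=avi_logs sourcetype=avi:events
| head 1
| fields waf_log.*
| transpose
| rename column as "WAF Setting", "row 1" as Value
```

The same shape with `fields all_request_headers.*` or `fields all_response_headers.*` gives the request-header and response-header panels, so the dashboard needs only three or four table panels instead of one per field.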
We have two standalone syslog servers (both active; one named primary and the other contingency), each with a UF installed that forwards data to Splunk. We have different indexes configured for these two servers. The issue is that the same log is getting indexed via both servers, resulting in duplicate logs in Splunk:

Syslog 1 --- index = sony_a == same log
Syslog 2 --- index = sony_b == same log

When we search index=sony*, the same logs appear in both indexes, which is duplication. How do we prevent the two syslog servers from indexing the same log twice?
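If both syslog servers must stay active on the same feed, deduplicating at search time is a workable band-aid; a minimal sketch, which works because the duplicate events carry identical raw payloads:

```
index=sony_a OR index=sony_b
| dedup _raw
```

The cleaner fixes are upstream: have sources send to a single virtual IP that fails over between the two servers, or keep the contingency server's UF inputs disabled until the primary actually fails, so only one copy is ever indexed.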
Hello, we are on ES 7.3.2. We are noticing a difference between the count of notable alerts visible on the "Incident Review" page and the number of events in the notable index for the same time period. For example, our Incident Review page, filtered to show all notables for the previous month's time range, shows 4648 notable alerts (screenshot attached). But index=notable for the previous month's time range shows 4653 events. We see this difference every month. Ideally, both numbers should match. How do we find out what is causing this mismatch, and what exactly is the reason?
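One common cause, offered as a hypothesis to verify: Incident Review renders notables through the ES `notable` macro, which among other things filters out suppressed notables, while index=notable counts every raw event. Comparing the two for the same time range should show whether suppression accounts for the gap:

```
| `notable` | stats count
```

versus

```
index=notable | stats count
```

If the macro count matches Incident Review (4648) and the raw count matches the index (4653), the five missing events are being filtered out, most likely by a notable event suppression, which can be reviewed in ES under the Incident Management suppression settings.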
Hi Splunkers, I'm testing with two separate Splunk deployments: one is the provider and one is local. I want to put a lookup file/definition or a KV store on the local deployment, so that when I run a search against the provider (via standard or transparent mode), the `lookup` resolves against the local lookup definition. How can I do this? Or could someone please explain this context to me? I've looked around at the lookup command, federated.conf, transforms.conf, distsearch.conf... My search looks like:

```
<base search>
| fields srcip, dstip
| lookup local=true serversList ip as srcip OUTPUTNEW serverName
```
I'm able to calculate the time difference between the start and end times of my job. I want to display the string value in a bar chart; how do I achieve this?

```
index=music Job=*
| eval Duration=(end-start_time)
| chart values(Duration) as Duration by "Start Time"
```
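Bar charts can only plot numeric values, so the duration has to stay a number for charting; tostring(Duration, "duration") produces a readable HH:MM:SS string, but that string belongs in a table or tooltip, not on the y-axis. A minimal sketch (the aggregation and split field are assumptions to adjust):

```
index=music Job=*
| eval Duration=end-start_time
| eval Duration_label=tostring(Duration, "duration")
| chart max(Duration) as "Duration (s)" by Job
```

Charting the numeric seconds keeps the bars proportional, and Duration_label can be shown alongside in a table panel if the human-readable form is needed.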