All Topics


How to calculate the number of times the same event has occurred in an index
Hello fellow Splunkers, I'm looking to replace all the Splunk icons in the user interface with custom icons. So far I have successfully updated the one on the login screen; however, I cannot update the icon in the top left (which also acts as the home button). Alternatively, I'm looking to hide the status bar (which contains the above-mentioned icon) for a subset of users. Any hints on how to achieve this? Thank you!
Hi, I'm pretty new to Splunk and I have a question. I am trying to send information from one index to another with the "collect" command. The problem is that when I send the events to the new index, the fields and values do not appear as they do in the old index (they disappear). I am using this search:

index=legacy sourcetype=old_legacy | collect index=new_legacy

But in the new index I'm not receiving the FIELD->VALUE pairs.
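A minimal sketch of one common workaround, assuming the missing fields are search-time extractions: collect only writes _raw (under the stash sourcetype), so extracted fields are lost unless they become part of the result text. Making the results "transformed" with table causes collect to serialize the fields as key=value pairs (field1 and field2 below are hypothetical placeholders for your real field names):

index=legacy sourcetype=old_legacy
| table _time field1 field2 ``` field1/field2 are placeholders ```
| collect index=new_legacy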
Please help with reading the credentials from password.conf in a Python script.
Guys, it's my first time here. I need to read the logs from my pfSense firewall and get alerts based on those logs. Any help on how I can achieve this is welcome. Thanks in advance.
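A minimal sketch of such an alert, assuming the pfSense syslog data has already been onboarded and borrowing the index and sourcetype names commonly used with the pfSense add-on (both are assumptions to adjust):

index=pfsense sourcetype=pfsense:filterlog action=blocked ``` index/sourcetype are assumptions ```
| stats count by src_ip
| where count > 100

Saved as a scheduled alert (for example, every 15 minutes over the last 15 minutes), this would fire whenever a single source IP is blocked more than 100 times.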
Hi all, can you suggest whether there is any way to ingest Bloomberg application data into Splunk?
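Assuming the Bloomberg application writes its data to log files, a minimal sketch of a file monitor input in inputs.conf on a forwarder (the path, index, and sourcetype below are placeholders):

[monitor:///opt/bloomberg/logs/*.log]
# path, index, and sourcetype are placeholders
index = bloomberg
sourcetype = bloomberg:app
disabled = false

If the data is only reachable via an API rather than files, a scripted or modular input (or the HTTP Event Collector) would be the equivalent route.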
Hello, please can someone assist with creating syntax to:
1. Know the number of desktops, laptops, servers, and network devices that I have onboarded into Splunk Cloud?
2. Create an alert if a new device is onboarded?
3. Count the number of each type of device that has been onboarded?
4. Create a table for the above?
Thanks
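A minimal sketch for items 1, 3, and 4, assuming a classification lookup exists (asset_types.csv, mapping host to device_type, is hypothetical; Splunk cannot tell a laptop from a server without such metadata):

| tstats latest(_time) AS last_seen where index=* by host
| lookup asset_types.csv host OUTPUT device_type ``` asset_types.csv is a hypothetical lookup ```
| stats count by device_type

For item 2, a common pattern is to keep a baseline lookup of known hosts, compare the hosts seen today against it, and alert when the search returns rows that are not in the baseline.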
I have events for a port listening on 443. How can I create a search and alert for when the port is down or not listening? Below are some sample events:

10/10/22 10:35:40.312 AM
2022-10-10 11:35:40.312 transport=TCP dest_ip=[::] dest_port=443 pid=4 appname=System
host = GBLONICORE01V
source = C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows_850\bin\win_listening_ports.bat
sourcetype = Script:ListeningPorts

10/10/22 10:35:40.312 AM
2022-10-10 11:35:40.312 transport=TCP dest_ip=0.0.0.0 dest_port=443 pid=4 appname=System
host = GBLONICORE01V
source = C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows_850\bin\win_listening_ports.bat
sourcetype = Script:ListeningPorts

10/10/22 9:35:40.006 AM
2022-10-10 10:35:40.006 transport=TCP dest_ip=[::] dest_port=443 pid=4 appname=System
host = GBLONICORE01V
source = C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows_850\bin\win_listening_ports.bat
sourcetype = Script:ListeningPorts

10/10/22 9:35:40.006 AM
2022-10-10 10:35:40.006 transport=TCP dest_ip=0.0.0.0 dest_port=443 pid=4 appname=System
host = GBLONICORE01V
source = C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows_850\bin\win_listening_ports.bat
sourcetype = Script:ListeningPorts
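A minimal sketch of the alert search, assuming the script output lands in an index named windows (an assumption; adjust to your environment) and runs at least hourly; saved as an alert that triggers when the count is zero, it fires when no listening-port event for 443 has been reported recently:

index=windows sourcetype="Script:ListeningPorts" dest_port=443 host=GBLONICORE01V earliest=-2h ``` index name is an assumption ```
| stats count
| where count=0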
Hi, how do I integrate Windows with Splunk (using the universal forwarder agent) so that the data appears in the Splunk Enterprise Security app? So far I can only integrate Splunk with Windows without involving Splunk Enterprise Security. I want to integrate Splunk Enterprise Security with Windows, and then run a brute-force test against those Windows machines. Thank you.
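For context, Enterprise Security consumes data through CIM data models rather than directly from inputs, so the usual route is: install the Splunk Add-on for Windows so the forwarded events are CIM-mapped, then verify the Authentication data model is populated. A minimal sketch of that verification (the datamodel and field names are standard CIM; the search itself is just an example):

| tstats count from datamodel=Authentication where Authentication.action=failure by Authentication.src, Authentication.user

If this returns your Windows logon failures, the ES brute-force correlation searches should be able to see the same data.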
Hello all, we are currently getting data from an application into these 5 indexes (index1, index2, index3, index4, index5) from different locations around the world. I want to create a new index called "index_global" and point all 5 indexes to this global index, so that all the data is available under a single index. I hope this makes sense. I would really like to understand how I can achieve this. Any help would be really appreciated. Thanks and cheers.
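For what it's worth, data cannot be re-routed from existing indexes into a new index, and indexes cannot be nested; the usual alternative is a search-time alias over all five. A minimal sketch as a search macro in macros.conf (the macro name app_global_indexes is a placeholder):

[app_global_indexes]
# the macro name is a placeholder
definition = index=index1 OR index=index2 OR index=index3 OR index=index4 OR index=index5

Any search can then start with `app_global_indexes` (in backticks), or inline with index IN (index1, index2, index3, index4, index5).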
Hi all, this is more of a general inquiry. I noticed that the _audit index collects a lot of activity, but it doesn't really tell in detail what has actually been done (if anything at all): edit user / edit role / edit index / remove ... What would be the recommended log levels for the different audit log channels? If I would like to see in detail what has been changed for a certain index, which log channel(s) and log level(s) would show that information? Note that in our environment any changes to indexes are made on the (Linux) server directly, not via the UI. Thanks in advance! Edwin
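As a starting point, a minimal sketch for seeing which audit actions your environment actually records, before chasing log channels (the time range is arbitrary):

index=_audit earliest=-7d
| stats count by action
| sort - count

From there you can drill into a specific action, e.g. index=_audit action=edit_user | table _time user info. Note that, as far as I know, changes made by editing .conf files directly on the server bypass splunkd and therefore never reach _audit regardless of log level; only changes made via the UI or REST API are audited.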
The single-column join is working:

index=* source=jar columns.path="*/log4j-core*" NOT columns.path=*/log4j*2.17* host IN (*.test.com)
| rename columns.pid AS pid, columns.pid_ts AS pid_ts, columns.path AS path
| dedup host path pid
| join pid type=left max=1
    [search index=* source=process host IN (*.test.com) earliest=-25h latest=now
    | rename columns.pid AS pid, columns.cmdline AS cmd, columns.username AS user, columns.uid AS uid, columns.groupname AS group, columns.gid AS gid
    | dedup host pid]
| table host, path, pid, user, uid, group, gid, cmd

but the multi-column join is not working:

index=* source=jar columns.path="*/log4j-core*" NOT columns.path=*/log4j*2.17* host IN (*.test.com)
| rename columns.pid AS pid, columns.pid_ts AS pid_ts, columns.path AS path
| dedup host path pid
| join host,pid type=left max=1
    [search index=* source=process host IN (*.test.com) earliest=-25h latest=now
    | rename columns.pid AS pid, columns.cmdline AS cmd, columns.username AS user, columns.uid AS uid, columns.groupname AS group, columns.gid AS gid
    | dedup host pid]
| table host, path, pid, user, uid, group, gid, cmd

Splunk Enterprise Version: 8.2.6, Build: a6fe1ee8894b
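A minimal sketch of a join-free alternative that is often recommended for multi-field correlation (combining both sources in one search and merging on host and pid with stats; it assumes the renamed field names above):

(index=* source=jar columns.path="*/log4j-core*" NOT columns.path=*/log4j*2.17* host IN (*.test.com)) OR (index=* source=process host IN (*.test.com))
| rename columns.pid AS pid, columns.path AS path, columns.cmdline AS cmd, columns.username AS user, columns.uid AS uid, columns.groupname AS group, columns.gid AS gid
| stats values(path) AS path, values(user) AS user, values(uid) AS uid, values(group) AS group, values(gid) AS gid, values(cmd) AS cmd by host, pid
| where isnotnull(path)

The final where keeps only host/pid pairs that actually had a log4j-core jar event.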
I am unable to find sourcetype="ms365:defender:incident:alerts". Can you please help?
Hey guys, I have the following event data (picture 1) coming into Splunk via a universal forwarder. I managed to generate a table out of it (by using different operations, like a delimiter on \n, and the statements in picture 2). My question is: can I define permanent parsing rules for specific directories, so that the statements in picture 2 and the \n delimiter are permanently applied for those directories? :) Sincerely, Leon
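A minimal sketch of what such permanent rules look like in props.conf, assuming the picture-2 logic amounts to event breaking on newlines (the directory path is a placeholder, and these settings belong on the indexer or a heavy forwarder, since a universal forwarder does not parse unstructured data):

[source::/var/log/myapp/*]
# the path is a placeholder
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Search-time field extractions (the equivalent of the picture-2 statements) can go in a similar source:: or sourcetype stanza on the search head using EXTRACT- or REPORT- settings.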
Hi, I need your help. I have a lookup table, vcs_ip.csv; inside the table I have a column named ip. This table lists all the allowed traffic. How do I construct a query to search for events whose Dst_ip and Src_ip are NOT found in the ip column of vcs_ip.csv?
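A minimal sketch, assuming the lookup file is already uploaded and the base search (the index name is a placeholder) yields Dst_ip and Src_ip fields; each lookup call outputs a marker field that stays null when there is no match:

index=firewall ``` index name is a placeholder ```
| lookup vcs_ip.csv ip AS Dst_ip OUTPUT ip AS dst_allowed
| lookup vcs_ip.csv ip AS Src_ip OUTPUT ip AS src_allowed
| where isnull(dst_allowed) AND isnull(src_allowed)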
Hello, the dates I have are in the form of a week start: for example, WeekStarting = 04/04/2022, 11/04/2022, and so on. I am unable to group the data, and the business now requires 3-month rolling average figures for the last 2 years. How can I achieve this? My search:

index=AB source=AB
| search (WeekStarting="2021*" OR WeekStarting="2022*")
| chart avg(DeviceCount) by WeekStarting

It should be visualized as a 3-month rolling analysis. I have also tried timewrap with span=1mon by device count, but no statistics appear! Please help.
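A minimal sketch of one approach, assuming WeekStarting is day/month/year (swap to "%m/%d/%Y" if it is month/day/year) and that 13 weeks approximates 3 months:

index=AB source=AB
| eval _time=strptime(WeekStarting, "%d/%m/%Y") ``` assumes day/month/year ```
| where _time >= relative_time(now(), "-2y")
| stats avg(DeviceCount) AS weekly_avg by _time
| sort 0 _time
| streamstats window=13 avg(weekly_avg) AS rolling_avg_3mo
| table _time, rolling_avg_3mo

Incidentally, the string filters WeekStarting="2021*" in the original search can never match, because the year sits at the end of dd/mm/yyyy values; converting to epoch time first avoids that problem.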
I don't have Enterprise Security FYI... Just Enterprise Search.  Appreciate your assistance in this matter...   Thanks
In the email alert configuration, I want to make certain text bold and add hyperlinks to the message text, instead of placing bare links. I am choosing the HTML & Plain Text option, but it won't render the tags and displays them literally in the email. Thanks in advance.
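For reference, a minimal sketch using the sendemail command, whose content_type option controls HTML rendering (the recipient is a placeholder); if the saved-alert UI keeps showing the tags literally, this command-level route is one way to confirm that HTML mail works at all in your environment:

| makeresults
| sendemail to="someone@example.com" subject="HTML test" content_type=html message="<b>Bold text</b> and a <a href='https://example.com'>link</a>" ``` recipient is a placeholder ```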
Hi AppDynamics, my problem is that I cannot push traces through OpenTelemetry into AppDynamics. I tried two different collectors, but the error message was the same. My license is Lite. Errors for the specific collectors:

Docker image collector: otel/opentelemetry-collector-contrib-dev:latest

otel-collector | 2022-10-09T22:08:20.436Z info exporterhelper/queued_retry.go:215 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "error exporting items, request to https://fra-sls-agent-api.saas.appdynamics.com/v1/traces responded with HTTP Status Code 403", "interval": "44.029649035s"}

Docker image collector: appdynamics/appd-oc-otel-collector

otel-collector | 2022-10-09T22:24:01.307Z info exporterhelper/queued_retry.go:231 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "error exporting items, request to https://fra-sls-agent-api.saas.appdynamics.com/v1/traces responded with HTTP Status Code 403", "interval": "4.688570936s"}

Collector settings: my server lives on DigitalOcean, Amsterdam region. Do I need to move it to Frankfurt? Thanks
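For context, an HTTP 403 from the ingest endpoint usually points to an authentication problem rather than a region problem. A minimal sketch of the relevant otlphttp exporter section in the collector config (whether AppDynamics expects the key in an x-api-key header, and whether a Lite license includes OpenTelemetry ingest at all, are assumptions to verify against the AppDynamics documentation):

exporters:
  otlphttp:
    endpoint: https://fra-sls-agent-api.saas.appdynamics.com
    headers:
      # assumption: the AppDynamics OTel ingest API key belongs here
      x-api-key: ${APPD_API_KEY}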