All Topics

I currently have v8.5 of the Splunk_TA_Windows app and the following stanza in inputs.conf:

[WinEventLog://AD FS/Admin]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = false

It doesn't seem to be working. I am also monitoring the Application, Security, and System logs, and those are showing up, but I don't see anything from the AD FS input. What am I doing wrong?
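One thing worth checking before touching the stanza: the name after WinEventLog:// must match the event log channel exactly. A quick check, assuming console access to the host (the findstr pattern is just an example):

wevtutil el | findstr /i /C:"AD FS"

If the channel shows up as "AD FS/Admin", the stanza name is right, and the next suspects are the target index (none is set here, so events would land in the default index) and whether this version of the TA is actually deployed to that forwarder.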
Hello! I am pulling in logs from a server; there are about 500 log files in the directory. We want to bring in 498 of them with a generic sourcetype, and two need a specific sourcetype. Is it as easy as this?

[monitor://C:\Program Files\Logs\*]
blacklist = log1:log2
disable=false
index=logs
sourcetype=logs

[monitor://C:\Program Files\Logs\*]
whitelist = log1:log2
disable=false
index=logs
sourcetype=specific:logs
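Not quite as written: whitelist and blacklist take regular expressions matched against the full path (pipe-separated alternatives, not colons), the attribute is "disabled" rather than "disable", and Splunk merges two [monitor://...] stanzas with an identical path into one. A sketch under those assumptions, with log1.log and log2.log as hypothetical file names:

[monitor://C:\Program Files\Logs\*]
blacklist = log1\.log$|log2\.log$
disabled = false
index = logs
sourcetype = logs

[monitor://C:\Program Files\Logs\log1.log]
disabled = false
index = logs
sourcetype = specific:logs

[monitor://C:\Program Files\Logs\log2.log]
disabled = false
index = logs
sourcetype = specific:logs

Monitoring the two special files by explicit path sidesteps the duplicate-stanza problem entirely.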
Hello, I have created a lookup definition for CIDR matching. The CIDR matching works just fine and I am able to whitelist the IPs in that particular subnet range. However, I want to know whether I can add single IPs to the same lookup file/definition (the CIDR lookup) as well. I want single-IP matching in the same lookup table where I have added the IP subnets. How should I proceed?
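A single address can usually live in a CIDR-matched lookup as a /32 entry, since a /32 matches exactly one host. A sketch, with all file, field, and stanza names hypothetical:

transforms.conf:

[cidr_ip_lookup]
filename = cidr_list.csv
match_type = CIDR(ip)

cidr_list.csv:

ip,is_allowed
10.10.0.0/16,true
192.168.1.25/32,true

Used in a search as:

| lookup cidr_ip_lookup ip AS src_ip OUTPUT is_allowed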
I'm trying to get auditd events into Splunk using the rlog.sh script from the Splunk Add-on for Unix and Linux. It isn't working: the audit logs are not being ingested, and no errors are appearing in index=_internal for the host. The script is successfully scheduled through the ExecProcessor component:

0400 INFO ExecProcessor [1975905 ExecProcessor] - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_nix_l1_inputs/bin/rlog.sh

To attempt to address the problem I have done the following:
- Had the host owner ensure dependent utilities are installed (listed in https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Requirements#Dependencies).
- Had the host owner change the log_group from root to splunk in /etc/audit/auditd.conf (suggested in https://community.splunk.com/t5/All-Apps-and-Add-ons/Can-t-get-rlog-sh-to-run/m-p/76143).

When executing rlog.sh in debug mode (./rlog.sh --debug) we get the following output:
- As the splunk user: blank output
- As the root user: expected output

Additional details:
- This host was recently rebuilt. Before the rebuild, the audit logs on this host were ingesting successfully through the add-on.
- Other scripts from the add-on are working on this host.
- This problem has not materialized on any of our other hosts using the add-on.

Thanks in advance for your input!
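The blank output as the splunk user points to a read-permission problem on the audit logs rather than a Splunk problem, since rlog.sh depends on ausearch being able to read them. A couple of checks worth running, assuming default paths:

sudo -u splunk head -1 /var/log/audit/audit.log
sudo -u splunk ausearch --input-logs -ts today | head

If either fails for the splunk user but works as root, the rebuild likely reset the permissions or group ownership on /var/log/audit.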
Received an error this morning on one of our non-distributed search heads: "The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch." Nothing works; I cannot search, and dashboards are non-functional.

Searching produces this error:

Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000

I did quite a bit of digging in the community and tried the following on my instance (non-distributed):

Dispatch:
- Tried the clean-dispatch command on our bloated dispatch directory (8,873 entries in /opt/splunk/var/run/splunk/dispatch).
- Shut down Splunk; even run with sudo, the command results in a "Permission denied" error.
- Ran the command: ./splunk cmd splunkd clean-dispatch /temp -1day

Bundle files:
- distsearch.conf has no maxBundleSize setting addressing the large .bundle files in /opt/splunk/var/run.
- If I delete the bundle files above, I can search for a little while on the search head, but then it fails again.

Now I am at a loss after reading so many articles, how-tos, and docs. I'm not a Splunk guy, but I am trying to get this stable.
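For reference, clean-dispatch takes a destination directory to move old jobs into plus a cutoff time, and the destination must already exist and be writable by the user Splunk runs as, which is the usual source of the "Permission denied". A sketch of the sequence, assuming Splunk runs as the splunk user and the destination directory name is arbitrary:

sudo -u splunk mkdir -p /opt/splunk/old-dispatch-jobs
sudo -u splunk /opt/splunk/bin/splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch-jobs/ -1d@d

Running it as root (sudo without -u) can leave root-owned files behind that break the search head again later.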
Hello All,

The log has spaces before and after the equals signs, with semicolon separation, so I'm unable to get the request status into a table with a search like:

index="gd" RequestStatus | table RequestStatus, _time

Would anyone have suggestions?

Log sample:

{"timestamp":"2022-11-02 17:01:21,421+0000","level":"INFO","location":"request_process:171","message":"request_id = 5ac3565f-d964-31cd-90b1-e8b7b208e7df; RequestStatus = Completed; RequestID = 5ac3565f-d9a64-31cd-9021-e8b7b208e7df--70ivkG0Td8OBpvWk; S3SourceKey = 1049x7555.xml ; "function_request_id":"b61aa34-f22b-53bc-957e-142456b9b7a5","xray_id":"1-6482a25d-78459fbe07213ee14x4386bd"}

The values I need to extract:
RequestStatus = Received
RequestStatus = Completed
RequestStatus = Error
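Since the spaces around the equals sign defeat automatic key=value extraction, a search-time rex can pull the field out. A minimal sketch:

index="gd" "RequestStatus"
| rex field=_raw "RequestStatus\s*=\s*(?<RequestStatus>\w+)"
| table _time, RequestStatus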
I'm struggling to create a sankey diagram that takes an initial username and connects that user to the IP addresses associated with that username, then takes each IP address and shows what other usernames might be associated with it. My initial search gets a list of IP addresses associated with a username. This works well; I then run stats on those results and it looks great with the sankey:

| stats count by username IP
| rename username AS user IP AS address

The problem comes when I try to append the second level of the sankey. I'm not quite sure how to take the address on the far right and create that second level, looking for associated usernames. My intention is to only go three levels. I assume I have to search my dataset by 'address' to see what usernames are associated?
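One way to build the second hop is to re-run the search filtered to the first hop's addresses and stack both levels with append, since the sankey viz just wants source/target/count rows. A rough sketch, assuming the raw events carry username and IP fields, with "mysource" and "some_user" as placeholders:

index=mysource username="some_user"
| stats count by username IP
| rename username AS source, IP AS target
| append
    [ search index=mysource NOT username="some_user"
        [ search index=mysource username="some_user" | stats count by IP | fields IP ]
      | stats count by IP username
      | rename IP AS source, username AS target ]
| stats sum(count) AS count by source target

The NOT clause keeps the second level from linking every address straight back to the starting user.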
I need to compare two fields, "Name" and "StudentName", and I am having problems with this. The values in "Name" do not contain accents, but the values in "StudentName" contain accents, like 'Róbert' or 'Czuukó', as well as names like 'Mary-Ann'. When I try to compare them, I get plenty of matches because most of the names don't have accents, but the names that contain an accent in "StudentName" won't show up as matching with "Name".
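SPL has no built-in de-accent function, but chained replace() calls can normalize the common cases before comparing. A sketch covering only a few substitutions (extend the character classes as needed for your data):

| eval norm=lower(StudentName)
| eval norm=replace(norm, "[áàâä]", "a")
| eval norm=replace(norm, "[éèêë]", "e")
| eval norm=replace(norm, "[íìîï]", "i")
| eval norm=replace(norm, "[óòôö]", "o")
| eval norm=replace(norm, "[úùûü]", "u")
| eval is_match=if(norm=lower(Name), "yes", "no")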
I've been having issues getting "CPU utilization" to show up on the Windows infrastructure dashboard. I found that when I click through the Windows entities to a single Windows machine, I see all the pertinent data for CPU utilization, but for some reason I cannot get it to show on the dashboards as a graph.

I have it set as the key indicator on the overview dashboard and it shows nothing, whereas the other key values show data (memory, network, and disk utilization).

Key info:
- 4 Windows hosts (all with the same issue; CPU utilization shows N/A)
- I have adjusted the search job schedule
- Entity discovery search is enabled, and I edited savedsearches.conf to give it more time
- Set the correct index in the macros in SA-ITOA
- Checked the _meta field in the Windows stanza; entity_type::windows_host is all there
- perfmon::CPU is all there

So it's odd that I am getting N/A for CPU utilization on the Windows entity overview page and the infrastructure overview dashboard. Any ideas would be greatly appreciated.
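A sanity check worth running is whether the raw perfmon counter returns data outside the dashboard at all. A sketch, with the index name a placeholder for whatever the SA-ITOA macro points at:

index=perfmon object=Processor counter="% Processor Time" instance=_Total
| stats latest(Value) AS cpu_pct by host

If this returns values for all four hosts, the gap is in the entity-level search rather than the data itself.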
help ! 
I have been experiencing issues getting the Splunk Universal Forwarder agent installed on AIX 7.1 and 7.2 servers. After installation of the Splunk UF agent and starting the "splunkd" daemon, it runs for a few seconds and then dies (see details below). Has anyone had this issue? If so, can you provide insight or a resolution?

root@PA-CLMLD001:/: ps -ef | grep splunkd
    root  7733308 18350184   0 10:48:09  pts/0  0:00 grep splunkd
root@PA-CLMLD001:/: /usr/bin/startsrc -s splunkd
0513-059 The splunkd Subsystem has been started. Subsystem PID is 7405708.
root@PA-CLMLD001:/: ps -ef | grep splunkd
    root  7405712  2752706 103 10:51:31      -  0:01 splunkd --nodaemon -p 8089 _internal_exec_splunkd
    root 11403272 18350184   0 10:51:35  pts/0  0:00 grep splunkd
    root 22544524  7405712   0 10:51:33      -  0:00 [splunkd pid=7405712] splunkd --nodaemon -p 8089 _internal_exec_splunkd [process-runner]
root@PA-CLMLD001:/: ps -ef | grep splunkd
    root  7733300 18350184   0 10:51:46  pts/0  0:00 grep splunkd
root@PA-CLMLD001:/:

Thanks,
Mel
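The first places to look are splunkd.log at the moment of the crash and the resource limits the daemon starts under, since low ulimits are a classic cause of early exits. A few checks, assuming a default /opt/splunkforwarder install path:

tail -50 /opt/splunkforwarder/var/log/splunk/splunkd.log
ulimit -a
errpt -a | more

The last command is the AIX error report, in case the OS itself terminated the process.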
Hi, I searched a lot and found no answer. I have data with the above timestamp format and I want to convert it into local time.

extract="year, month, day, hour, minute, second, zone"

with the pattern

(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(\S+)\s+

works OK when the time zone is given in the form "+0000", but not with "UTC". Is there something like "litzone" available?

Thanks in advance,
Volkmar
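If timestamp recognition via props.conf is an option instead of a custom datetime.xml extract, %Z in TIME_FORMAT accepts literal zone names such as UTC. A sketch, with the sourcetype name a placeholder:

[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
MAX_TIMESTAMP_LOOKAHEAD = 32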
Hi Splunkers, I'm looking for the best way to send Mulesoft logs and events. Here on the community I found "What is the best way to integrate Mulesoft with Splunk cloud?", which states, in a nutshell, to follow this approach. It is clear enough how to implement it; my doubt is not about the procedure itself but about another point. The linked answer shows, let's say, direct forwarding from Mulesoft to the Splunk indexers/environment. What if I plan to put an HF between Mulesoft and the indexers? Do I follow the same procedure, simply creating the token on my HF and then, once the data arrives from Mulesoft, forwarding it to the indexers the usual way? Or are there other changes I have to make?

Note: I assumed an HF as the intermediate host for the required token generation; I believe I cannot generate one on a UF. Feel free to correct me if I'm wrong.
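That reading sounds right, and as far as I know HEC is not available on a universal forwarder, so an HF is the natural intermediate host. The flow is the same procedure with the token created on the HF: Mulesoft posts to the HF's HEC port, and the HF's normal forwarding carries the data to the indexers. A sketch of the HF side, with token, index, and server names hypothetical:

inputs.conf:

[http]
disabled = 0

[http://mulesoft_hec]
disabled = 0
token = <generated-token>
index = mulesoft

outputs.conf:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997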
Hi Splunkers,

I have the following panel in my dashboard, and I need different drilldowns for the three table columns:

a     b     c
1234  abcd  xyz

When I click 1234 (column a), I expect to use 1234 as input to open another panel in the same dashboard. When I click abcd or xyz (column b or c), I expect to use them as input to open a different dashboard accordingly. How do I code this condition in the drilldown section?

Thanks in advance,
Kevin
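In Simple XML this is typically handled with <condition field="..."> blocks inside <drilldown>: set a token for the same-dashboard panel, and emit links for the cross-dashboard columns. A sketch, with dashboard paths, form input names, and the token name hypothetical:

<drilldown>
  <condition field="a">
    <set token="tok_a">$click.value2$</set>
  </condition>
  <condition field="b">
    <link target="_blank">/app/search/dashboard_b?form.input=$click.value2|u$</link>
  </condition>
  <condition field="c">
    <link target="_blank">/app/search/dashboard_c?form.input=$click.value2|u$</link>
  </condition>
</drilldown>

The second panel then carries depends="$tok_a$" and uses $tok_a$ in its search, so it only appears after a click on column a.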
Hello, we have developed a dashboard to monitor the sources of attacks. The dashboard works fine; however, referring to the image on the left, when I hover over the indicator it displays the count. How can I modify the search to capture the count as displayed on the right? Below is my query:

index="qradar_offenses" | spath | iplocation src | geostats count by src

Thanks in advance
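If the goal is a single aggregate count per location rather than a per-source breakdown, dropping the split-by clause may be all that's needed; geostats then emits one count per geographic bin. A sketch:

index="qradar_offenses" | spath | iplocation src | geostats count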
I've been tasked with deploying a highly available and scalable Splunk setup in AWS. I've looked briefly at two methods so far: deploying clustered search heads and indexers on EC2 instances, and using the Splunk Operator for Kubernetes to achieve the same. My questions:

- Does anyone have experience with this, and which deployment method would you recommend?
- With your recommended approach, can you autoscale components, or does this need to be scaled manually?
- What is the best way to get data into Splunk Enterprise?

Would really appreciate any advice you could offer! Thank you.
I've tried several times now, but I can't get Splunk Enterprise to install on my Windows 10 machine. I even tried an older version with no success.
Hi! I have a set of Python scripts running every night as part of an automation process. These scripts output data into the _internal index, which is restricted for a large portion of the workforce. However, some teams should be able to view the output from these scripts, as it would help them troubleshoot why hosts fail. I was hoping to solve the access issue by having a scheduled search run every night collecting the output of the Python scripts, and then using that report as a base search in a new dashboard built in Dashboard Studio. I would then use the chain-search function to create the tables and reports that staff with fewer privileges could use to access the otherwise unavailable information.

But my problem is that I cannot get the base search to work, no matter what I put under the key:

App: "Put the name of the app holding the saved search here"

When I use the "open in search" function from the dashboard, I notice that whatever I put under the "app" context, Splunk does not care; it always resolves to "undefined". Here is an example of the code from Dashboard Studio:

{
  "type": "ds.savedSearch",
  "options": {
    "ref": "internal_base_search",  #this should be the name of the report
    "app": "search"  #The app where the report exists.
  },
  "name": "Saved Search Data Source From S&R"
}

So no matter what I do, the URL I get using the "open in search" function points to:

<hostname>:8000/en-US/app/undefined/search?s=Internal%20base%20search%20Clone

and if I manually change it to the app that holds the report, it finds it just fine, for example:

<hostname>:8000/en-US/app/search/search?s=Internal%20base%20search%20Clone

Has anyone had a similar issue? This feels like a bug to me: the "app" key in the JSON-formatted source code does not resolve correctly.
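One thing worth ruling out first: JSON has no "#" comment syntax, so if those comments exist in the actual source (rather than only in this post), the data source definition may not be parsing as intended. A cleaned sketch with the same names:

{
  "type": "ds.savedSearch",
  "options": {
    "ref": "internal_base_search",
    "app": "search"
  },
  "name": "Saved Search Data Source From S&R"
}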
Our Splunk system has never had usage above 6GB on a 10GB license. Two weeks ago, usage jumped to 30GB, and on weekends it jumps to almost 100GB. We are locked out of searches, of course. Nothing in the system has changed as far as I am aware. I can pull up a usage report for 30 days, and it shows that all this usage is being recorded in the default/main index. The histogram looks the same for each week since the issue started: about 30GB per day, and over twice that on weekends. What could cause this? How can I detect what is sending data to the default/main index when I can't run any searches? The deployment hasn't changed since 2019; there are about 30 servers with forwarders.
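license_usage.log lives in _internal, which is typically still searchable during a violation, and it records usage by source (s), sourcetype (st), host (h), and index (idx). A sketch for breaking down the spike, assuming this instance is the license master:

index=_internal source=*license_usage.log type=Usage idx=main
| stats sum(b) AS bytes by s, st, h
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

Run it over the spike window; whichever source/host combination tops the list is the new sender.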
I am working on an app using the Splunk Python SDK and trying to generate logs with the logging library. I used all the logging levels: info, warning, error, debug (e.g., logging.error("Checking error")). But no logs are being generated in any file. Can anyone please help me with this issue?
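Scripts launched by splunkd have no console attached, so the logging module needs an explicit file destination; writing under $SPLUNK_HOME/var/log/splunk has the side benefit that splunkd indexes the file into _internal. A minimal sketch, with the log file name hypothetical:

import logging
import os

# Resolve a log path under the Splunk install; falls back to /opt/splunk.
log_file = os.path.join(
    os.environ.get("SPLUNK_HOME", "/opt/splunk"),
    "var", "log", "splunk", "my_app.log",
)

# One root-logger configuration: file destination, threshold, and line format.
logging.basicConfig(
    filename=log_file,
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.error("Checking error")  # now lands in my_app.log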