
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Has anyone used SimData for threat and vulnerability data generation? Is there a template available somewhere? Thanks.
Hi all, I'm at a bit of an impasse. An executive would like to see colors that make sense to him in my Punchcard visualization of the number of WiFi devices in a particular space. My data looks like this:

date_hour  Location          Capacity  CapacityColor
0          Art Museum Staff  10        5
0          Lobby             3         5
1          Art Museum Staff  10        5
1          Lobby             5         5
10         Art Museum Staff  31        4
10         Lobby             90        2
11         Art Museum Staff  34        4

I have fiddled with all manner of "CapacityColor" values, charting options, even the field options I found at: https://docs.splunk.com/Documentation/Splunk/9.1.0/DashStudio/objOptRef

I've tried my search in both Dashboard Studio and Classic, though I'll be honest, I prefer Classic. The best I seem to be able to do is using Sequential and setting max/min to something like red and green. A very "autumn" palette of five colors comes out, but I can't change the legend. If I set CapacityColor to match "Capacity" based on thresholds like 90, 75, 50, 10, 0, it picks seemingly random numbers for the legend (which will confuse said executive). I wanted to be able to use fieldColors={"Over 90%": "red" ... } like I've seen with other charting options, but I haven't found an iteration of that which works in the punchcard visualization either. Has anyone found a way to modify the colors?
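For illustration, a minimal SPL sketch of one workaround sometimes tried: bucket the value into human-readable range labels in the search itself, so whatever field the punchcard colors by carries names the legend can show. The field names come from the sample above; the assumption that Capacity is already a percentage is mine:

<your existing search>
| eval CapacityBand=case(Capacity>=90, "Over 90%", Capacity>=75, "75-90%", Capacity>=50, "50-75%", Capacity>=10, "10-50%", true(), "Under 10%")
| table date_hour, Location, Capacity, CapacityBand

Whether the punchcard visualization then honors per-value colors for CapacityBand still depends on the options it exposes, so treat this only as a way to get meaningful labels into the legend rather than a guaranteed fix.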
Is it possible to set up the VSCode extension to connect to multiple instances?
Actually I have an on-prem instance of Splunk Enterprise installed locally, but for incident response we need to forward the logs from specific indexes to Splunk Cloud. I've been reviewing the "Distributed Search" option, but if I'm not mistaken it exposes all of the Splunk Enterprise data. Is there any way to perform this activity?
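For what it's worth, selective routing by index is usually handled on the forwarding tier with props/transforms rather than with distributed search. A rough sketch, where the sourcetype, index names, and output group are all placeholders:

# props.conf (on the on-prem instance doing the forwarding)
[my_sourcetype]
TRANSFORMS-route_to_cloud = route_to_cloud

# transforms.conf
[route_to_cloud]
SOURCE_KEY = _MetaData:Index
REGEX = ^(ir_index1|ir_index2)$
DEST_KEY = _TCP_ROUTING
FORMAT = splunkcloud

# outputs.conf
[tcpout:splunkcloud]
server = <from the Splunk Cloud forwarder credentials package>

Events in the matching indexes get routed to the splunkcloud output group; everything else keeps whatever routing the defaultGroup already defines.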
I know queue backlog troubleshooting questions are very common, but I'm stumped here. I have 2 Universal Forwarders forwarding locally monitored log files (populated by syslog-ng forwarding) over TCP to 4 load-balanced Heavy Forwarders, which then send them to a cluster of 8 indexers. These Universal Forwarders are processing a lot of data, approximately 500 MB per minute each, but this setup worked without lag or dropped logs up until recently. The disk IO and network speeds should easily be able to handle this volume.

However, recently the Universal Forwarders have stopped forwarding logs, and the only WARN/ERROR entries in splunkd.log are as follows:

08-25-2023 14:40:09.249 -0400 WARN TailReader [25677 tailreader0] - Could not send data to output queue (parsingQueue), retrying...

And then, generally some seconds later:

08-25-2023 14:41:04.250 -0400 INFO TailReader [25677 tailreader0] - ...continuing.

My question is this: assuming there's no bottleneck in the TCP output to the 4 HFs, what "parsing" exactly is being done on these logs that would cause the parsingQueue to fill up? I've looked at just about every UF parsingQueue related question on Splunk Answers, and I've addressed some common gotchas:

- maxKBps in the [thruput] stanza in limits.conf is set to 0 (unlimited thruput)
- 2 parallel parsing queues, each with 1 GB of storage (higher than recommended, but I was desperate)
- no INDEXED_EXTRACTIONS for anything but Splunk's preconfigured internal logs

I've taken some other steps as well:

- set up logrotate on the monitored files to rotate any file that gets larger than 1 GB, so Splunk isn't monitoring exceptionally large files
- set "DATETIME_CONFIG = NONE" and "SHOULD_LINEMERGE = false" for all non-internal sourcetypes

I don't understand why the parsingQueue would fill up when it doesn't seem like there are any parsing actions configured on this machine! Is anyone able to advise me on what to look for or change to resolve this? Thanks much.
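As a side note, one search that often helps narrow down which queue is actually the blocked one (rather than just the first queue to complain) looks roughly like this; the host value is a placeholder for one of the UFs:

index=_internal host=<uf_host> source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) by name

If the tcpout queue sits at 100% alongside parsingQueue, the backpressure is really coming from the output side (or from the HFs/indexers behind it), and the TailReader warning is just the most visible symptom.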
Hi Team, We have 2 Heavy Forwarder servers in our environment (A and B), and on both HF servers we have installed the add-on "Splunk Add-on for Microsoft Cloud Services", running version 4.3.3. We have configured the inputs (90+ in total), which fetch data from Azure Storage Tables and Blobs. The inputs are in the enabled (active) state on HF server A, whereas on server B we have done all the same configuration as on A but with the inputs disabled, so logs were being ingested through this add-on only from server A.

Recently we upgraded the Splunk Add-on for Microsoft Cloud Services from 4.3.3 to 5.1.0 on HF server B (the one with the inputs disabled). Post-upgrade I enabled all 92 inputs on B to check the log flow, then compared it with A and observed a difference in behavior between the two servers: when I search the data for the last 60 minutes with the host from B (the server with the upgraded add-on), I can see around 221,738 events, whereas the same search against server A for the same 60 minutes shows only 110,531 events. I don't know where the gap is or why there is such a huge difference. So kindly help with how to check this and get it fixed.
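A starting point sometimes used for this kind of comparison is to line up the two hosts' volume per input over the same window; the index, sourcetype, and host names below are placeholders:

index=<mscs_index> sourcetype=<mscs_sourcetype> (host=HF_A OR host=HF_B) earliest=-60m
| stats count by host, source

If B shows roughly double the events for the same sources, a quick duplicate check over a short window (for example stats count by _raw with a where count>1) can tell whether both servers, or the new version by itself, are re-reading the same blobs and tables.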
Hi Team, Actually we have 2 HF servers (A and B) in our environment, and on both HF servers we have installed the add-on "Splunk Add-on for Microsoft Office 365", running version 4.0.0. The input was in the enabled (active) state on HF server A, whereas on server B we had done all the configuration but with the inputs disabled, so logs were being ingested through this add-on from server A.

Recently we upgraded the Splunk Add-on for Microsoft Office 365 from 4.0.0 to 4.3.0 on HF server B, which was in the disabled state. Post-upgrade I enabled the inputs to check the log flow from HF server B, did a comparison with server A, and observed a drastic difference in the log flow between the two servers: HF server A, running the older version of the add-on, seems to have fewer events for the last 30 minutes, whereas the same query against server B, where the add-on was upgraded, shows a huge number of events.

I have monitored the status for a couple of days but it is still the same, and I am not sure whether the newly upgraded server is showing duplicate events or not, so kindly help with how to get this fixed.

FYI, we have the same configuration on both HF servers (inputs and the rest). So kindly help to check and update please.
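To check whether the upgraded server is actually producing duplicates, a sketch along these lines is often enough; it assumes the events expose a unique identifier field (the O365 management activity events generally carry an Id), and the index/host names are placeholders:

index=<o365_index> host=HF_B earliest=-30m
| stats count by Id
| where count > 1
| sort - count

If nothing comes back, the difference is more likely the newer version collecting content the old one was missing, rather than duplication.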
I just started rolling out universal forwarder 9.1.0.1 on a few machines. To my horror I noticed that Splunk again made a significant change in a minor release: the forwarder is now owned by the user "splunkfwd" instead of "splunk". I can only see this change in https://docs.splunk.com/Documentation/Forwarder/9.1.0/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Linux There is no other mention of or warning about this. Am I the only one who needs to change a significant amount of automation/installation scripts because of this change? I know the tarball is one workaround, but really?
Hi Splunk Experts, I have a big list of rex commands in my search query. In my dashboard I added those rex commands to a token and used it in the search panels, because I have 3 to 4 panels and I don't want to re-write the same set of rex commands again and again. But now I want to add the search to a scheduled report. How can I achieve the same behavior in the scheduled report? Please shed some light. Thanks in advance!!
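One common way to reuse the same chunk of SPL in both dashboards and scheduled reports is a search macro rather than a token. A rough sketch, where the macro name and the rex patterns are just placeholders:

# macros.conf (or Settings > Advanced search > Search macros)
[my_rex_chain]
definition = rex field=_raw "(?<field_a>...)" | rex field=_raw "(?<field_b>...)"

Usage in the report and in the dashboard panels is then identical:

index=<your_index> sourcetype=<your_sourcetype> `my_rex_chain` | stats count by field_a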
Hello,

Current SQL query:

SELECT * FROM "vxex"."dbaex"."Example_Log_View" WHERE [Event Time] > ? ORDER BY [Event Time] ASC

I am having some issues with log duplication on one of my inputs. The SQL query executes, but it gives me a warning on Step 4 within my rising input option. I am not too well versed in Splunk DB Connect; I believe it may be the space in the Event Time column name, but I put [] around it to mitigate that, and it still runs through to step 5.

What I do see is a checkpoint value of 01/01/1970. Here are some examples of the Event Time values that come through from the SQL query:

2023-04-20 10:02:30.87
2023-04-20 10:03:23.783

Any thoughts on how to troubleshoot this issue so that step 4 runs properly?
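One thing sometimes tried in this situation (purely a sketch; whether it clears the Step 4 warning depends on the driver) is to expose the timestamp under an alias with no space in it and point the rising column at the alias:

SELECT *, [Event Time] AS event_time
FROM "vxex"."dbaex"."Example_Log_View"
WHERE [Event Time] > ?
ORDER BY [Event Time] ASC

A checkpoint stuck at 01/01/1970 suggests DB Connect never reads a valid value for the rising column, which fits the idea that the column reference (rather than the data) is the problem and would also explain the duplication, since the input keeps re-reading from the epoch.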
I thought this would be easy but I'm struggling. I have a CSV of firewall rules from yesterday, and a CSV of firewall rules from today. They're being ingested into Splunk, but I figured the easiest way to compare the two would be to make them lookup tables. However, I can't figure out how to compare ALL the values. This is to audit for any changes.

The data is kind of like this:

Yesterday's rules:

Name         Action  SrcIp           DestIp        Port
WebServer    Allow   192.168.1.2     192.168.0.3   80
Application  Deny    192.168.1.10    192.168.0.11  1020
Outbound     Allow   192.168.0.0/24  *             80

Today's rules:

Name         Action  SrcIp           DestIp                   Port
WebServer    Allow   192.168.1.2     192.168.0.3,192.168.0.4  80
Application  Deny    192.168.1.10    192.168.0.11             1020
Outbound     Allow   192.168.0.0/24  *                        80

In the example the WebServer rule can now reach an additional server, but in reality any value could change and I need to alert on it. I basically just want to do a diff. A little surprised it's difficult to do in Splunk.
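A sketch of the sort of diff people usually build once both files are lookups (the lookup file names are placeholders; the field names come from the sample above):

| inputlookup yesterday_rules.csv | eval day="yesterday"
| append [| inputlookup today_rules.csv | eval day="today"]
| stats values(day) as seen_on by Name, Action, SrcIp, DestIp, Port
| where mvcount(seen_on) = 1

Any row that appears in only one of the two files (added, removed, or changed in any column) survives the where clause, which is effectively a diff and can be used as the basis of an alert.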
Hi there, I hope Lily Lee, the developer of the .conf Archive Search app (https://splunkbase.splunk.com/app/3330), sees this post. I would like to ask for .conf23 data to be added. Does anyone know whether there is a plan to do this? Best regards,
Can you please let me know what the maximum data ingestion limit is when we use the HEC service?
Hi there, Our system administrators wanted something from the Blue Team: they want to view users with root privileges other than the root user itself, i.e. users with a [0:0] UID/GID. Is it possible to see who has a [0:0] UID? For reference, this is what the root entry looks like:

root:x:0:0:root:/root:/bin/bash

I'm a bit stuck on this task. Any help would be appreciated! Kind Regards.
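Assuming the contents of /etc/passwd are already being indexed somewhere (for example through a scripted input from a *nix add-on), a sketch like this would list UID/GID 0 accounts other than root; the index and sourcetype are placeholders:

index=<os_index> sourcetype=<passwd_sourcetype>
| rex field=_raw "^(?<user>[^:]+):[^:]*:(?<uid>\d+):(?<gid>\d+):"
| where tonumber(uid)==0 AND tonumber(gid)==0 AND user!="root"
| stats latest(_time) as last_seen, values(user) as uid0_users by host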
Hi All, We are currently in the learning phase of Splunk and want to integrate Splunk with ServiceNow using the Splunk Integration App from the ServiceNow Store to create incident tickets in ServiceNow. Could anyone help or share suggestions on how to perform this? Also, do we need to install the Splunk Add-on for ServiceNow from Splunkbase in our Splunk Enterprise?
Hi Splunkers, today I noticed a behavior I don't understand and I'm here to ask for your help. In a customer environment, we have some data (Forcepoint) that reaches a HF (so, a Splunk Enterprise instance) and is then sent to Splunk Cloud. The collection method uses a script that pulls the logs and saves them to a path, so the related data input is a Monitor input, which continuously monitors the pulled data, and it shows up as such in the Monitor input list.

For parsing, we are using a sourcetype that does not seem to work properly, because it is not configured to extract csv data (the logs are collected by the script as csv files). So, to avoid changing it during a testing phase, we:

- configured another sourcetype
- configured another index
- in the custom sourcetype properties, set Indexed Extractions to csv
- finally, associated the new sourcetype and new index with the monitor input.

The thing that happens and that I don't understand is this: if we make this change, all the .csv log files start to be listed in the Monitor input list and the associated app becomes splunk_instrumentation. In the screenshot I captured, only the last csv file was present, but if I leave this configuration many others will follow (all the ones captured by the script). Why does this happen? Is it related to the Indexed Extractions property of the custom sourcetype?
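For reference, a sketch of the kind of configuration being described (paths, index, and sourcetype names are placeholders, not the customer's actual values). One thing worth keeping in mind: with INDEXED_EXTRACTIONS the structured parsing happens on the instance that reads the files, so the sourcetype definition has to live where the monitor input runs (the HF in this case):

# inputs.conf
[monitor:///opt/forcepoint/output/*.csv]
index = forcepoint_test
sourcetype = forcepoint:csv

# props.conf
[forcepoint:csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = <name of the time column in the csv>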
Hi, I have an index that returns logging events in JSON format. I want to create a tabular dashboard which will dynamically update the JSON key-value pairs in rows and columns for visualization purposes. Any help would be highly appreciated.

Search:

index=log-1696-nonprod-c laas_appId=tsproid_qa.sytsTaskRunner laas_logId=CC6F5AA6-3813-11EE-AD8F-237241A57196

Event (the values change and it's dynamic):

"groupByAction": "[{\"totalCount\": 40591, \"action\": \"update_statistics table\"}, {\"totalCount\": 33724, \"action\": \"reorg index\"}, {\"totalCount\": 22015, \"action\": \"job report\"}, {\"totalCount\": 10236, \"action\": \"reorg table\"}, {\"totalCount\": 7389, \"action\": \"truncate table\"}, {\"totalCount\": 3291, \"action\": \"defrag table\"}, {\"totalCount\": 2291, \"action\": \"sp_recompile table\"}, {\"totalCount\": 2172, \"action\": \"add range partitions\"}, {\"totalCount\": 2088, \"action\": \"update_statistics index\"}, {\"totalCount\": 2069, \"action\": \"drop range partitions\"}]"

The table should have "totalCount" and "action" as columns.
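Since groupByAction holds a JSON array as a string, one sketch for turning it into totalCount/action rows looks like this (it assumes groupByAction is already extracted as a field, which Splunk's automatic JSON extraction usually does):

index=log-1696-nonprod-c laas_appId=tsproid_qa.sytsTaskRunner laas_logId=CC6F5AA6-3813-11EE-AD8F-237241A57196
| spath input=groupByAction path={} output=row
| mvexpand row
| spath input=row
| table action, totalCount

Because the parsing is driven by whatever keys appear in each array element, the resulting table updates on its own as the event content changes.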
I want to extract the numeric values into a separate field:

"combinedrules": ["3000039", "3000081", "958052", "973335", "XSS-ANOMALY"]

Expected output:

Ruleid
3000039
3000081
958052

There might also be a case where there are 2 rule IDs in one event, and I want to see both displayed on a single line.
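A sketch of one way to do this with rex, assuming the values appear in _raw exactly as in the sample (the capture-group names are mine):

| rex field=_raw "\"combinedrules\":\s*\[(?<combinedrules_list>[^\]]+)\]"
| rex field=combinedrules_list max_match=0 "\"(?<Ruleid>\d+)\""
| table Ruleid

Ruleid comes out as a multivalue field, so when an event carries several numeric rule IDs they all stay on the same table row (and the quoted non-numeric values like XSS-ANOMALY are skipped); adding | eval Ruleid=mvjoin(Ruleid, ", ") would collapse them into one comma-separated string if preferred.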
Hi, I am deploying Sysmon all across our company, but for some reason the Sysmon events are not getting indexed. Our deployment is the following:

- Splunk 9.0.5 running on Windows Server
- sysmon index created manually in Splunk
- inbound firewall rule created allowing TCP traffic on port 9997
- Sysmon TA installed on the server in C:\Program Files\Splunk\etc\deployment-apps\Splunk_TA_microsoft_sysmon
- default/inputs.conf enabled (by default):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational

- local/inputs.conf containing:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index=sysmon

- Splunk Universal Forwarder 9.1.0 deployed on all hosts
- all UFs are reporting correctly to Splunk
- confirmed that the Sysmon TA is present on all hosts, deployed via forwarder management using a server class
- /etc/system/default/outputs.conf is pointing to the right Splunk server:

[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = xxxxxx:9997
[tcpout-server://xxxxxx:9997]

- Sysmon 15 deployed on all hosts
- confirmed that the events are being created locally on the hosts under the Microsoft --> Windows --> Sysmon tree

But not a single event appears in the sysmon index. Does anyone have any idea or suggestion of what might be missing? Many thanks
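One quick check that often narrows this down is to see whether the events are arriving at the indexers at all but landing somewhere other than the sysmon index (for example, if the local/inputs.conf override isn't reaching the forwarders). A sketch, using the source value from the TA's input stanza:

index=* source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" earliest=-4h
| stats count by index, host, sourcetype

If events show up under a different index (such as the default one), the problem is the index override; if nothing shows up at all, the next place to look is splunkd.log on one of the forwarders for WinEventLog or connection errors.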
Hi, I am using OpenTelemetry to push Prometheus metrics into a Splunk index with the metrics data type. After pushing the metrics, the following error is logged:

The metric event is not properly structured, source=XXX, sourcetype=XXX, host=XXX, index=XXX. Metric event data without a metric name and properly formatted numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

May I know what is causing this issue and how I can solve it, please? Thank you.
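For context, the shape the error message is asking for is the HEC metrics event format, roughly like the sketch below (all values are placeholders). If the payload reaching the metrics index doesn't have this shape, for example because the exporter is emitting log/event-style data rather than metrics, Splunk rejects it with exactly this message:

{
  "time": 1692979200,
  "host": "my-host",
  "source": "otel",
  "sourcetype": "prometheus:metric",
  "index": "my_metrics_index",
  "event": "metric",
  "fields": {
    "metric_name:cpu.idle": 96.3,
    "region": "us-east-1"
  }
}

Comparing this against what the OpenTelemetry pipeline actually emits (for example by routing the same data to a debug/logging output first) usually shows which part is missing or malformed.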