All Topics

I have data that looks like the following:

Week         Employee    Project#
6/3/2022     A           001
6/3/2022     A           002
6/10/2022    A           002
6/10/2022    B           002
6/17/2022    A           003
6/17/2022    B           001
6/17/2022    B           002
6/24/2022    B           001

I would like to get a total count of the distinct weeks that employees appear in the data, regardless of how many projects they have entries for. So, for the above, the count should be 6, as below:

6/3/2022 > Employee A > Count=1
6/10/2022 > Employee A and B > Count=2
6/17/2022 > Employee A and B > Count=2
6/24/2022 > Employee B > Count=1

Is there some way I can use multiple fields in distinct count to accomplish this?
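A minimal SPL sketch of one way to do this, assuming the fields are already extracted as Week and Employee (the index name is a placeholder): collapse the data to one row per Week/Employee pair, then count the remaining rows. For the sample above this returns 6.

index=your_index
| stats count by Week, Employee
| stats count as total
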
Hey all, I'm trying to pull the syslog of our Meraki MX into our on-premise Splunk Enterprise in order to monitor internal port scanning. Right now I have the syslogs coming in via Data inputs > UDP (514). I see all the data being pulled in correctly; however, when I search internal traffic communication it shows everything going to the broadcast IP. I'm not sure if I should be using a different method, but I would appreciate some guidance on best practices for monitoring internal traffic. Thanks!
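For reference, a UDP input like the one described is typically defined in inputs.conf along these lines; the sourcetype and index names here are assumptions rather than Meraki-specific recommendations:

[udp://514]
sourcetype = meraki_syslog
index = network
connection_host = ip
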
How do I create a search over the last 14 days that covers a specific time range (02:00 - 06:00) only?
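A sketch of one common approach, assuming the events carry the default date_hour field and that the 02:00 - 06:00 window should apply to each of the 14 days (the index name is a placeholder):

index=your_index earliest=-14d@d latest=now
| where date_hour >= 2 AND date_hour < 6
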
I have a lookup table with only one field, named host. The table contains a list of hostnames. I'm trying to find a way to get a count of events by host using this lookup table as the input (i.e. the hosts I want a count for). I've tried a variety of approaches. For example:

| inputlookup file.csv | stats count by host

Every host returns a count of 1.

| inputlookup file.csv | join type=left host [| tstats count by host]

About a dozen hosts return counts; the rest return null values. Complicating this problem seems to be case: if I crunch all the hosts to upper or lowercase, I get different results, but neither returns a complete result set. That seems super odd given that field values aren't case sensitive. I've tried crunching case with eval as well as in the lookup table itself, to no avail. We're stumped. What is the best approach to use a lookup table of hostnames to get an event count by host?
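A sketch of one pattern that may help, assuming the lookup really is file.csv with a single host column and that case has been normalized the same way on both sides: drive the event search from the lookup as a subsearch, then append the lookup back so hosts with no events still show up with a count of 0.

index=* [| inputlookup file.csv | fields host]
| stats count by host
| append [| inputlookup file.csv | fields host | eval count=0]
| stats sum(count) as count by host
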
I have the following row in a CSV file that I am ingesting into a Splunk index:

"field1","field2","field3\","field4"

Excel and the default Python CSV reader both correctly parse that as 4 separate fields. Splunk does not. It seems to be treating the backslash as an escape character and interpreting field3","field4 as a single mangled field. It is my understanding that the standard escape character for double quotes inside a quoted CSV field is another double quote, according to RFC 4180: "If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote." Why is Splunk treating the backslash as an escape character, and is there any way to change that configuration via props.conf or any other way? I have set:

INDEXED_EXTRACTIONS = csv
KV_MODE = none

for this sourcetype in props.conf, and it is working fine for rows without backslashes in them.
Can the DGA App for Splunk be installed in Splunk Cloud?
I've imported a .csv that has many fields, but the only one I care about has multiple values in it.

pluginText:
<plugin_output>
Computer Manufacturer : VMware, Inc.
Computer Model : VMware Virtual Platform
Computer SerialNumber : This is what I REALLY need
Computer Type : Other
Computer (etc.)
</plugin_output>

I've tried extracting and filtering; I believe regex may work, but that is where I'm stuck.
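A minimal rex sketch to pull just the serial number out of that multi-line value, assuming the field really is named pluginText as shown:

| rex field=pluginText "Computer SerialNumber\s*:\s*(?<serial_number>[^\r\n]+)"
| table serial_number
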
I'm trying to run a query to figure out the top 10 src_ips along with their top 10 URLs visited. When I try the query below, it gives me every src_ip instead of just the top 10. Any suggestions on how to limit the search to just the top 10 src_ip by top 10 url? I've been running something like this:

index=firewall
| stats count by src_ip, url
| sort 0 src_ip -count
| streamstats count as standings by src_ip
| where standings < 11
| eventstats sum(count) as total by category
| sort 0 -total src_ip -count
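A sketch of one way to restrict both dimensions, assuming total event count per src_ip is the ranking criterion: rank the src_ip values by their overall totals first, keep the top 10, and only then keep the top 10 URLs within each.

index=firewall
| stats count by src_ip, url
| eventstats sum(count) as total by src_ip
| sort 0 -total src_ip -count
| streamstats dc(src_ip) as ip_rank
| where ip_rank <= 10
| streamstats count as url_rank by src_ip
| where url_rank <= 10

Because the rows are sorted by -total before the first streamstats, the running distinct count increments once per src_ip and serves as its rank.
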
Is there a way to send all matching notable events to a custom index with very vague fields (for confidentiality reasons)? I would like to send event data to a new index that basically says "You have a new alert" so that I can integrate it with an XSOAR solution without disclosing any confidential information. This is due to the way the ingestion script is written - anyone can modify the query to pull information from the logs. The intention is to notify analysts that an alert is present without (potentially) exposing this information to unauthorized individuals.
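As a sketch of one possible mechanism (the source and destination index names and the message text are placeholders): a scheduled search could strip everything except a generic message and write the result to a separate index with collect, which the XSOAR ingestion script would then read.

index=notable
| eval message="You have a new alert"
| table _time message
| collect index=soar_notifications
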
Hi everyone, I need your help. The KV store stays in "starting" status and then sends me the following errors. I validated the certificate and it is valid, and I also already gave permissions to the splunk.key file, but even so it does not come up.

07-18-2022 12:21:04.601 -0500 ERROR KVStoreBulletinBoardManager [12334 MongodLogThread] - KV Store process terminated abnormally (exit code 14, status PID 12335 exited with code 14). See mongod.log and splunkd.log for details.
07-18-2022 12:21:04.507 -0500 ERROR MongodRunner [12334 MongodLogThread] - mongod exited abnormally (exit code 14, status: PID 12335 exited with code 14) - look at mongod.log to investigate.

kvStoreStatus
-------------
starting
failed
failed
failed
failed
failed
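For reference, a couple of commands that are often useful when digging into this, assuming a default installation layout:

$SPLUNK_HOME/bin/splunk show kvstore-status
tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log
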
Is there a way to show a currency symbol after the value? Like $393.26
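A minimal sketch using fieldformat, assuming the numeric field is called amount; fieldformat only changes how the value is rendered, not the stored value. Swap the concatenation order to put the symbol before the number instead (as in $393.26).

| fieldformat amount = tostring(amount, "commas") . " $"
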
I saw that there are two options to send logs from a universal forwarder to an indexer. We can use [httpout] to send the logs to an HTTP Event Collector on the indexer on port 8088; alternatively, we can use [tcpout] to send the logs to the indexer on port 9997. Which one is the best practice to implement? Thanks.
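For context, a typical [tcpout] configuration on the universal forwarder looks roughly like this; the output group name and indexer hostnames are placeholders:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
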
I have a file which gives me the following output:

srvrmgr> list comps
SHOW SV_NAME,CP_DISP_RUN_STATE,CP_STARTMODE,CP_NUM_RUN_TASKS,CP_MAX_TASKS,CP_ACTV_MTS_PROCS,CP_MAX_MTS_PROCS,CP_START_TIME,CP_END_TIME,CC_ALIAS

SV_NAME CP_DISP_RUN_STATE CP_STARTMODE CP_NUM_RUN_TASKS CP_MAX_TASKS CP_ACTV_MTS_PROCS CP_MAX_MTS_PROCS CP_START_TIME CP_END_TIME CC_ALIAS
------- ----------------- ------------ ---------------- ------------ ----------------- ---------------- ------------- ----------- --------
lnx001 Online Auto 0 50 1 1 2022-07-18 03:53:03 comp_123 comp_123
lnx003 Online Auto 0 50 1 1 2022-07-18 03:53:03 comp_456 comp_123
lnx005 Online Auto 0 20 1 1 2022-07-18 03:53:03 comp_123 comp_123
lnx007 Online Manual 0 50 0 1 2022-07-18 03:53:03 comp_987
lnx010 Online Manual 0 500 0 5 2022-07-18 03:53:03 comp_564
lnx011 Online Auto 643 4000 40 40 2022-07-18 03:53:03 comp_123

I only want to extract the 1st field and the 4th (numeric) field, keep only the rows where the component is comp_123, and discard all the other entries, showing the 1st field as host, the 4th field as runningtasks, and the final field as component. Please help me with the filters.
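A sketch of one way to do this with rex, assuming each data row arrives as its own event and that the last whitespace-separated token on the line is the component (CC_ALIAS); index and sourcetype names are placeholders:

index=your_index sourcetype=your_sourcetype
| rex "^(?<host>\S+)\s+\S+\s+\S+\s+(?<runningtasks>\d+)\s+.*\s(?<component>\S+)$"
| where component="comp_123"
| table host runningtasks component

Header and separator lines never match the \d+ in the fourth position, so they fall out at the where step.
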
Hi, how can I modify the x-axis so that it displays only the date for each column?

query
| eval finish_time_epoch = strftime(strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval start_time_epoch = strftime(strptime(START_TIME, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval duration_s = strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S") - strptime(START_TIME, "%Y-%m-%d %H:%M:%S")
| eval duration_min = round(duration_s / 60, 2)
| chart sum(duration_min) as "time" by TimeDate
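A sketch of one way to get date-only labels, assuming TimeDate currently holds a full timestamp string in "%Y-%m-%d %H:%M:%S" format (adjust the parse format if it differs): reformat it to just the date before the chart command, so each column is keyed by the date alone.

| eval TimeDate = strftime(strptime(TimeDate, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| chart sum(duration_min) as "time" by TimeDate
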
Hello, we are trying to get data from a NetApp server and trying to understand the process for onboarding the data. I have already installed the add-ons available on Splunkbase (one main add-on with 3 other supporting add-ons):

https://splunkbase.splunk.com/app/5396/
https://splunkbase.splunk.com/app/5616/
https://splunkbase.splunk.com/app/3418/
https://splunkbase.splunk.com/app/5615/

My question is: do I need to install a Splunk forwarder on the NetApp server, or is the data collection API based? In the attached screenshot there are 2 options. I configured "ONTAP Collection Configuration", but there is also an option "Add data collection node" which requires forwarder details.

Thanks
Ankit
I have two actions linked together. The first one is a block with custom code where I want to list all of the files inside a directory using `os.listdir()`. The second one is a decision block. I would like to be able to pass the result of the first block into the second. How can I go about it?
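A rough Python sketch of one way data is sometimes handed between blocks in a SOAR playbook run, assuming the phantom.save_run_data() / phantom.get_run_data() helpers from the playbook API are available in your version; the directory path, key name, and function signatures below are illustrative placeholders rather than a tested recipe:

import json
import os

import phantom.rules as phantom


def list_incoming_files(action=None, success=None, container=None, results=None, handle=None, **kwargs):
    # Custom-code block: list the directory contents and stash them
    # for a later block in the same playbook run.
    files = os.listdir("/tmp/incoming")  # placeholder path
    phantom.save_run_data(value=json.dumps(files), key="incoming_files", auto=True)
    return


def act_on_files(action=None, success=None, container=None, results=None, handle=None, **kwargs):
    # Later block: read the stored list back and branch on it.
    files = json.loads(phantom.get_run_data(key="incoming_files") or "[]")
    if files:
        phantom.debug("Found {} file(s) to process".format(len(files)))
    else:
        phantom.debug("No files found")
    return
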
We are testing federated search. When on the provider (environment A), the fields are nicely extracted. When on the federated SH (environment B), the fields do not show up; however, they are usable in SPL for filtering etc. Does anyone have an idea whether this is a bug or a restriction? I have found restrictions documented for transparent-mode FS, but we are entirely on-prem (8.2.1), so we are using standard mode. I am not in conflict with any of the restrictions documented below, just a simple search (and data does return):

index=federated:<source_index> sourcetype=<sourcetype>

Extract from the docs:

Restrictions for standard mode federated search
Standard mode federated search does not support the following:
- Generating commands other than search, from, or tstats. For example, federated searches cannot include the datamodel or inputlookup commands. You can find a list of generating commands in Command types, in the Search Reference.
- The from command can reference only datasets of the saved search and data model dataset types.
- Real-time search.
- Usage of wildcard symbols (*) to reference multiple federated indexes.
- Metrics indexes and related metrics-specific search commands, such as mpreview or mstats. If you must include metrics data in a federated search, consider mapping a federated index to a saved search dataset that contains metric data. See Create a federated index.

In the section on transparent mode there is something about search-time field extraction, which then self-references as a page.

Any ideas?
Hi Team, I am creating an authorization token from Splunk Web, and the token I received consists of more than 256 characters. Is it possible to reduce its length, or is there any alternative approach by which we can generate a token of up to 256 characters? My tool has a limitation that it cannot accept more than 256 characters as a token. Thanks, Venkata
Greetings! How do I deploy the FortiMail add-ons in a Splunk Enterprise distributed environment? As of now, I have already downloaded the apps and uploaded them to the Splunk search head; what do I do next? In my environment I have 7 servers (1 SH, 1 Splunk management node, 5 indexers). Kindly guide me on what to do next. Thank you in advance!
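If the 5 indexers form an indexer cluster managed by that management node, one common pattern (sketched here under that assumption; the add-on directory name is a placeholder, and newer versions use manager-apps instead of master-apps) is to place the add-on on the cluster manager and push the configuration bundle:

cp -r Splunk_TA_fortimail $SPLUNK_HOME/etc/master-apps/
$SPLUNK_HOME/bin/splunk apply cluster-bundle

Parsing-time add-ons generally need to be on the indexers as well as the search head, while input configuration belongs on whichever forwarder collects the data.
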
Hello, we are using a Splunk HEC token to receive the EKS logs in Splunk. The Splunk EKS monitoring containers are running docker.io/splunk/fluentd-hec:1.2.5. We want to upgrade Splunk from version 8.0.5 to 8.2.7 and want to know whether the docker.io/splunk/fluentd-hec:1.2.5 container version is compatible with Splunk 8.2.7 or not.