All Topics

Please help me fix this SPL to produce the license usage listed above. Thanks a million. This is not working for me:

index="_internal"
| stats sum(GB) as A by Date, idx
| eventstats max(A) as B by idx
| where A=B
| dedup A idx
| sort idx
| table Date, A, idx
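Not a definitive fix, but note that GB and Date are not fields that exist in _internal by default; per-index license usage normally comes from the license_usage.log type=Usage events, where the volume is in the b field (bytes) and the index name is in idx. A sketch along those lines, keeping your idea of showing each index's peak day:

index=_internal source=*license_usage.log* type=Usage
| bin _time span=1d
| stats sum(eval(b/1024/1024/1024)) as GB by _time idx
| eventstats max(GB) as maxGB by idx
| where GB=maxGB
| sort idx
| table _time GB idx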
I'm using tstats on an accelerated data model which is built off of a summary index. Everything works as expected when querying both the summary index and the data model, except in an exceptionally large environment that produces 10-100x more results when running dc().

This works fine in said environment and produces roughly 17,000,000:

| tstats summariesonly=true count(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

This produces 0 results, when it should be around 400,000:

| tstats summariesonly=true dc(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

Even though the summary index works fine and produces roughly 400,000:

index=summary_host_data earliest=-1d | stats dc(hostname)

Finally, if I search over 6 hours instead of 1d, I do get results from the tstats using dc(). Is there some type of limit I'm running into with dc()? Or is there something else going on?
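Not an authoritative diagnosis, but when an exact dc() over a very large accelerated data model returns nothing while count() works, one workaround worth testing is the approximate distinct count estdc(), which is far cheaper at high cardinality (the rest of the query is unchanged):

| tstats summariesonly=true estdc(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

If estdc() returns the expected ~400,000, that at least points at a resource or limits issue with exact dc() rather than a problem with the summaries themselves.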
I just set up a heavy forwarder with the AWS add-on. I launched the app and went to the configuration page, but all I get is the spinning 'Loading' icon. I looked over a previous question like this: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Add-on-for-AWS-Hangs/m-p/316801

But I'm not using any Google services at all, nor do I have any file "/etc/boto.cfg".

When I look through the splunkd logs I see this:

+0000 ERROR AdminManagerExternal [1021 TcpChannelThread] - Stack trace from python handler:
Traceback (most recent call last):
  File "/home/y/var/splunk/lib/python3.7/site-packages/splunk/admin.py", line 114, in init_persistent
    hand.execute(info)
  File "/home/y/var/splunk/lib/python3.7/site-packages/splunk/admin.py", line 637, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws_rh_settings.py", line 58, in handleList
    entity = client.Entity(self._service, uri % (service, LOGGING_ENDPOINTS[service]))
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 900, in __init__
    self.refresh(kwargs.get('state', None))  # "Prefresh"
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1039, in refresh
    self._state = self.read(self.get())
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1009, in get
    return super(Entity, self).get(path_segment, owner=owner, app=app, sharing=sharing, **query)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 766, in get
    **query)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 680, in get
    response = self.http.get(path, all_headers, **query)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1184, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1242, in request
    response = self.handler(url, message, **kwargs)
  File "/home/y/var/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1383, in request
    connection.request(method, path, body, head)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1277, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1323, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1272, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1032, in _send_output
    self.send(msg)
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 972, in send
    self.connect()
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 1439, in connect
    super().connect()
  File "/home/y/var/splunk/lib/python3.7/http/client.py", line 944, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/home/y/var/splunk/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/home/y/var/splunk/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

Which looks like it's trying to autoconfigure itself as mentioned in: https://docs.splunk.com/Documentation/AddOns/released/AWS/Setuptheadd-on#Find_an_IAM_role_within_your_Splunk_platform_instance

But this is not running in AWS, so I planned to configure it manually. However, I cannot because of the above problem. How do I fix this?

Splunk 8.2.2
Splunk Add-on for AWS 5.2.0
The case at https://community.splunk.com/t5/Getting-Data-In/Issue-on-file-monitoring-using-forwader/m-p/478063#M82045 is similar. While files are being FTP'ed to the monitored location, we see errors in _internal saying the file can't be read. Come the weekend, when this host is rebooted, the files are ingested. We looked at MonitorNoHandle, which allows reading a file while it is being written on Windows, but MonitorNoHandle only allows one such file per stanza. We asked the customer to FTP the files to another directory and move them over later via a script, but the customer wasn't thrilled about this idea. We also thought there might be a way to have the UF check for new files multiple times before putting them on the blacklist, but that doesn't seem to be possible. What can we do?
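Not a guaranteed fix, but one knob that is sometimes worth testing with files that arrive over FTP is time_before_close on the monitor stanza, which controls how long the tailing processor waits after hitting end-of-file before closing a file that is still being written; whether it avoids the blacklisting behaviour in this case would need to be verified. A sketch (the path and value are placeholders):

[monitor:///data/ftp_landing/*.log]
index = main
time_before_close = 60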
I have a field named failcode with numerous fail code names, structured like this:

date        failcode  count
2021-10-01  g-ab      123
2021-10-01  g-bc      258
2021-10-01  g-cd      369
2021-10-01  c-ab      456
2021-10-01  c-bc      124
2021-10-01  c-cd      325
2021-10-01  d-ab      854
2021-10-01  d-bc      962
2021-10-01  d-cd      362
2021-10-01  d-dd      851
2021-10-02  g-ab      963
2021-10-02  g-bc      101
2021-10-02  g-cd      171
2021-10-02  c-ab      320
2021-10-02  c-bc      214
2021-10-02  c-cd      985
2021-10-02  d-ab      165
2021-10-02  d-bc      130
2021-10-02  d-cd      892
2021-10-02  d-dd      964
2021-10-03  g-ab      653
2021-10-03  g-bc      285
2021-10-03  g-cd      634
2021-10-03  c-ab      689
2021-10-03  c-bc      752
2021-10-03  c-cd      452
2021-10-03  d-ab      365
2021-10-03  d-bc      125
2021-10-03  d-cd      691
2021-10-03  d-dd      354

I want to keep only certain codes: g-ab, c-cd, and d-dd, and not display the rest in my results. Essentially, I just want to display certain results from my failcode column.
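Not the only way, but the simplest filter is to append the IN operator to whatever search produces that table, for example:

... | search failcode IN ("g-ab", "c-cd", "d-dd")

(The leading "..." stands for your existing search.)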
I have a field called alphabet that stores multiple values. I want to create a search that only returns events that have exactly one of those values. In this example I want to return all events where only alphabet = a exists.

'index = test WHERE alphabet=a' does not work, as it returns all events where 'a' exists, including alongside other values. For the three event examples below, I would like a search that only returns event 3. How can I build a search that does this?

Event 1: alphabet = a,b,c,d
Event 2: alphabet = a,b
Event 3: alphabet = a
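One way to express "this value and nothing else" is to combine the value match with a multivalue count of 1. A sketch, assuming alphabet is a true multivalue field:

index=test alphabet=a
| where mvcount(alphabet)=1

If alphabet is actually a single comma-separated string rather than a multivalue field, split it first with | makemv delim="," alphabet before the mvcount check.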
Does anyone know if there is a native k8s installation package available for splunk-connect-for-snmp? All the installation documentation I can find references MicroK8s on Ubuntu. Thanks, jb
I am trying to implement Splunk across multiple domains. Due to company policy, some domains don't have access to the internet, and as a result servers in those domains cannot communicate with the indexers located in Splunk Cloud. What is the best way to send data to the indexers in this case? One of the suggestions we received is to use a heavy forwarder as an intermediate, but that would introduce a single point of failure, so we might need to implement a load balancer. Is there any other way?
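Not the only option, but the usual way to avoid a single point of failure without an external load balancer is to run two or more intermediate heavy forwarders and let the forwarders in the restricted domains auto-load-balance across them in outputs.conf. A sketch (hostnames and port are placeholders):

[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997

With two or more servers listed, the forwarder switches targets automatically if one heavy forwarder goes down.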
Hello, in our environment we have Splunk HFs with 2 parallel ingestion pipelines. https://docs.splunk.com/Documentation/Splunk/8.2.2/Capacity/Parallelization#Index_parallelization One aim of those HFs is to offload the parsing, merging, and typing pipelines from the Splunk indexers. Because of that, the data coming from the HFs is already "processed", and our indexers mostly run it through the index pipeline only. https://wiki.splunk.com/Community:HowIndexingWorks On the indexers we only have 1 ingestion pipeline, and the CPU cores used for indexing are typically 4-6. Do our indexers take advantage of pretty much all of those 4-6 CPU cores for the index pipeline alone, or are the cores "wasted" on the other, mostly idle, pipelines? Thanks a lot, Edoardo
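For reference (not a sizing recommendation, just how the knob is spelled), the number of pipeline sets is configured per instance in server.conf, so the heavy forwarders and the indexers can be set independently:

[general]
parallelIngestionPipelines = 2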
Hi all, I have a simple question in mind. I know that the communication between Splunk instances is partly encrypted by default, and that the cluster communication is not, at least not by default. Is there a Splunk recommendation to leave it unencrypted? Are there existing recommendations for intra-indexer-cluster and intra-search-head-cluster communication? What options actually exist? Kind regards, O.
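Not official guidance, but as an illustration of one of the options: indexer cluster replication traffic can be moved onto an SSL replication port in server.conf on each peer. A sketch (the port number and certificate path are placeholders to adapt):

[replication_port-ssl://9887]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = <certificate password>

Search head cluster replication and splunkd-to-splunkd management traffic have their own SSL settings, so each channel has to be considered separately.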
Is anyone using NiFi, StreamSets or Cribl as part of your log delivery pipeline? My team is trying to build a more robust pipeline. Before data is sent to Splunk, we would love to clean up and fix any data issues before the data gets indexed. Looking for experiences, and pros and cons of each tool. Any experience that could be shared would be really appreciated. Regards, The Frunkster
Hi, I configured an archiving policy and I would like to be notified when logs are archived. Is there any way to do so? I guess that if archive jobs are logged as system logs, I can detect them in the _internal index. Thank you
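Not certain this covers every archiving path, but bucket freeze activity is logged by splunkd's BucketMover component, so a scheduled alert over _internal is one way to get notified. A sketch (verify the exact wording against your own splunkd.log first, as it can vary by version):

index=_internal sourcetype=splunkd component=BucketMover *freeze*
| table _time host _raw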
We have configured the Panorama management logs to go to the syslog server correctly. When checking the PAN logs on the core search head, however, the logs are going to the catch-all index. Please suggest the correct configuration to fix this issue.
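Hard to say without seeing the configuration, but if the syslog files are picked up by a file monitor, the index and sourcetype usually have to be set explicitly on that input, otherwise the events fall through to the default (catch-all) index. A sketch, assuming the conventional Palo Alto setup; the path, index name, and sourcetype are assumptions to adapt to your environment:

[monitor:///var/log/pan/panorama.log]
index = pan_logs
sourcetype = pan:log
disabled = false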
When searching through certain sourcetypes and indexes, we are seeing a discrepancy between the time and date shown for the event time. Suggestions are welcome on diagnosing this issue. Thanks in advance.
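As a first diagnostic step (not a fix), comparing the parsed event time with the index time usually shows whether this is a timestamp-extraction/timezone problem or genuinely delayed data. A sketch, pointed at one of the affected sourcetypes (index and sourcetype are placeholders):

index=<your_index> sourcetype=<your_sourcetype>
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S %z")
| eval lag_seconds=_indextime-_time
| table _time index_time lag_seconds _raw

A constant offset of a whole number of hours usually points to a TZ setting in props.conf; randomly varying lag points to delivery delay instead.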
Hi, I generate a csv automatically by executing the search below in my prod environment:

index=tutu | stats last(ez) as ez by sam | outputlookup ez.csv

What is strange is that when I call the csv with

| inputlookup ez.csv

from the prod environment, it works fine: I have 2 columns with ez and sam. But when I call it from the dev environment, the csv is truncated, because the ez column is empty, and in the sam column I have both the ez field value and the sam field value! Does anybody have an explanation, please?
I have a dashboard where I want some help developing a JavaScript file that will perform validation on one of my fields. I have a field named 'url'. I want users to input only values that have no padded spaces at the start or end and that don't contain a double quote ("). I could use some help building the JavaScript to achieve this.

So far, I have created a field ID:

<input type="text" token="url" id="url_value" searchWhenChanged="true">
  <label>URL</label>
</input>

In the Simple XML I have invoked the .js file:

<form script="validate the field.js">

I need some help writing the JavaScript that checks for padded spaces at the start or end and for the double quote ("), so that users avoid entering these things in the field. This is as far as I got; I don't know what to write next:

require(["jquery", "splunkjs/mvc/simplexml/ready!"], function($) {
    $("[id^=url_value]")
        .attr('type','number')
        ...
});
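Not a polished solution, but here is a minimal sketch of the kind of handler that could go in the .js file: it watches the text box, checks the value for leading/trailing spaces and double quotes, and highlights the input when it is invalid. The selector and the styling are assumptions based on the id above, so adapt them to how you want to surface the error:

require(["jquery", "splunkjs/mvc/simplexml/ready!"], function($) {
    // validate whenever the URL text box changes
    $(document).on("change keyup", "[id^=url_value] input", function() {
        var val = $(this).val() || "";
        var invalid = (val !== val.trim()) || (val.indexOf('"') !== -1);
        // highlight the box when the value has padded spaces or a double quote
        $(this).css("border", invalid ? "2px solid red" : "");
    });
});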
Hello, I was wondering whether it is feasible to have a multiselect input in a dashboard which would allow the user to save the chosen values as their own variant and reuse it next time. An example would be a list of SIDs (system IDs) in a monitoring dashboard, let us say around 20, which the user has to analyse. After choosing them in the first multiselect, the user should get the possibility to save the selection under his own variant name, which would then be offered the next time he opens the dashboard. Has anyone done anything similar? Kind Regards, Kamil
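One possible approach (a sketch only; the lookup name and the tokens are invented for illustration) is to persist the selection per user with outputlookup from a search the dashboard runs when the user clicks a save control, keyed on the built-in $env:user$ token:

| makeresults
| eval user="$env:user$", variant="$variant_name_tok$", sids="$sid_tok$"
| fields user variant sids
| outputlookup append=true user_sid_variants.csv

The multiselect's choices or default could then be driven by | inputlookup user_sid_variants.csv | where user="$env:user$" the next time the dashboard loads.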
Hi all, asking for a friend. I have a Juniper SRX380 for my firewall, and I am trying to bring data into Splunk on-prem. On the Juniper side, I configured it to send to Splunk using the CLI with these commands (below), then committed the configuration:

set security log mode stream
set security log source-address <SRXip>
set security log stream Splunk format sd-syslog
set security log stream Splunk host <splunkhostIP>
set system syslog host <splunkhostIP> port 1514

On the Splunk side, I configured a UDP listener on port 1514, used the optional setting to restrict connections to the SRX IP, and set the source type to "juniper" from the Juniper TA. I used Wireshark to do a pcap analysis and noticed that the SRX wasn't communicating with Splunk; I have a hunch that it's a Juniper issue, but I'm not a Juniper expert. The problem is that no data is coming in at all. Is there something wrong with what I did on either the Juniper side or the Splunk side? Also, I made sure UDP port 1514 was open. Any troubleshooting tips would be appreciated.
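Not a definitive answer, but one thing stands out: the security log stream itself never specifies a destination port, and the stream configuration is separate from the system syslog line, so the data-plane stream may be going to the default syslog port rather than 1514. If your Junos release accepts a port under the stream host (worth verifying; this is an assumption), something like the following, plus an explicit UDP transport, could be tried:

set security log stream Splunk host <splunkhostIP>
set security log stream Splunk host port 1514
set security log transport protocol udp

Also keep in mind that in stream mode the security logs are sent from the data plane, so the set system syslog host line only covers control-plane messages.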
I have props.conf:

[source::tcp:7660]
TRUNCATE = 10000000
LINE_BREAKER = {\"time
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
pulldown_type = true
KV_MODE = json
#TZ = America/Chicago
TZ = UTC

I see that some events are not parsed in JSON format.
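One likely culprit (hedged, since only the stanza is visible here, not the raw stream): LINE_BREAKER must contain a capturing group whose match is discarded as the event delimiter, and {\"time has no group, so Splunk may be falling back to default line breaking and splitting some JSON events in unexpected places. Assuming each event starts with {"time on a new line, a corrected sketch would be:

[source::tcp:7660]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{"time)
TRUNCATE = 10000000
KV_MODE = json
TZ = UTC

The lookahead keeps the {"time prefix as part of the following event instead of discarding it.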
Hello, the documentation says that a [host::<host>] stanza in props.conf must be used with a host pattern. Is there a way to use a regexp? I have to match host names like "[vp][mnas][pdtiv].*".
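Not an authoritative answer: props.conf.spec documents only a limited wildcard syntax for these stanza match expressions ('...' for any number of characters, '*' for anything except a path separator, '.' as a literal dot), so a full character-class regexp like yours may not be honoured and would need testing on your version. The documented, safe form is a wildcard, for example:

[host::vp*]
TZ = UTC

(TZ here is just a placeholder setting; the point is the host pattern.)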