All Topics

Is anyone using NiFi, StreamSets or Cribl as part of your log delivery pipeline? My team is trying to build a more robust pipeline. Before data is sent to Splunk, we would love to clean up and fix any data issues so nothing problematic gets indexed. Looking for experiences, pros and cons for each tool. Any experience that could be shared would be really appreciated. Regards, The Frunkster
Hello, When will Splunk Fundamentals for Splunk 8.x be available? Also, can Fundamentals 1 for 7.x fulfill the prerequisite for Fundamentals 2 for 8.x? Thanks  
Hi, I configured an archiving policy and I would like to know when logs are archived. Is there any way to do so? I guess that if an archive job is logged as a system log, I can detect it in the _internal index. Thank you.
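A minimal sketch of that _internal search, assuming the default splunkd logging: when buckets roll to frozen (archive), the BucketMover component writes messages to splunkd.log, though the exact wording varies by Splunk version and by whether a coldToFrozenScript is configured, so treat the keywords below as a starting point:

    index=_internal sourcetype=splunkd component=BucketMover ("freeze" OR "frozen")
    | table _time host _raw

If those messages show up, an alert can be scheduled on the same search to get notified whenever archiving happens.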
We have configured the Panorama management logs on the syslog server correctly. When checking the PAN logs on the core search head, the logs are going to the catch-all index. Please suggest the correct configuration to fix the issue.
When searching through certain sourcetypes and indexes, we are seeing a discrepancy between the time and date for the event time. Suggestions welcomed on diagnosing this issue. Thanks in advance.
Hi, I generate a csv automatically by executing the search below in my prod environment:

index=tutu | stats last(ez) as ez by sam | outputlookup ez.csv

What is strange is that when I call the csv with | inputlookup ez.csv from the prod environment, it works fine: I have 2 columns with ez and sam. But when I call it from the dev environment, the csv is truncated, because the ez column is empty and in the sam column I have both the ez field value and the sam field value! Does anybody have an explanation please?
I have a dashboard where I want some help developing a JavaScript that will perform validation on my field. I have a field named 'url'. I want users to input only values that don't have any padded spaces at the start or end, and it also shouldn't contain a double quote ("). I could use some help building a JavaScript to achieve this. So far I have created a field ID:

<input type="text" token="url" id="url_value" searchWhenChanged="true"> <label>URL</label></input>

In the Splunk XML I have invoked the .js file:

<form script="validate the field.js">

I do need some help writing the JavaScript that could check for padded spaces at the start or end, and also check for the double quote ("), so that users would avoid inputting these things in the field.

require(["jquery", "splunkjs/mvc/simplexml/ready!"], function($) { $("[id^=url_value]") .attr('type','number') ....don't know what next to write here in the .js
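A rough sketch of what that JavaScript could look like, assuming the generated text box can be reached through the "url_value" id used above and that the url token should only be set while the value is valid; the selector, styling and event choice are illustrative, not a tested drop-in:

    require([
        "jquery",
        "splunkjs/mvc",
        "splunkjs/mvc/simplexml/ready!"
    ], function ($, mvc) {
        var tokens = mvc.Components.get("default");

        // valid = no leading/trailing whitespace and no double-quote character
        function isValid(value) {
            return !/^\s|\s$/.test(value) && value.indexOf('"') === -1;
        }

        $(document).on("change keyup", "[id^=url_value] input", function () {
            var value = $(this).val() || "";
            if (isValid(value)) {
                $(this).css("border", "");
                tokens.set("url", value);   // publish only clean values
            } else {
                $(this).css("border", "1px solid red");
                tokens.unset("url");        // block searches on bad input
            }
        });
    });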
Hello, I was wondering whether it is feasible to have a multiselect input in the dashboards which would allow the user to save the chosen values as a user variant and reuse them next time. An example would be a list of SIDs (system IDs) in a monitoring dashboard, let us say around 20, which the user has to analyse. After choosing them in the first multiselect, the user should get the possibility to save the selection under his own variant name, which would be offered the next time he starts the dashboard. Has anyone done anything similar? Kind Regards, Kamil
Hi all, asking for a friend. I have a Juniper SRX380 for my firewall, and I am trying to bring data into Splunk on-prem. On the Juniper side, I configured it to send to Splunk using the CLI with these commands (below), then committed the configuration:

set security log mode stream
set security log source-address <SRXip>
set security log stream Splunk format sd-syslog
set security log stream Splunk host <splunkhostIP>
set system syslog host <splunkhostIP> port 1514

On the Splunk side, I configured a UDP listener on port 1514, gave it the optional "Select from connection", and plopped the SRXip there. I set the source type to be "juniper" from the Juniper-TA. I used Wireshark to do a pcap analysis and noticed that the SRX wasn't communicating with Splunk. I have a hunch that it's a Juniper issue, but I'm not a Juniper expert. The problem is that no data is coming in at all. Is there something wrong that I did on either the Juniper side or the Splunk side? Also, I made sure UDP port 1514 was open. Any troubleshooting tips would be appreciated.
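For reference, a sketch of what the Splunk-side listener typically looks like in inputs.conf; the setting names are real, but the sourcetype/index values and the <SRXip> placeholder are just assumptions. Since the pcap already shows the SRX never sending, this mostly helps rule the Splunk side out:

    [udp://1514]
    connection_host = ip
    sourcetype = juniper
    index = main
    # acceptFrom restricts the listener to the firewall's address; consider removing
    # it while troubleshooting so test traffic from other hosts is not silently dropped
    acceptFrom = <SRXip>

A quick way to test the listener independently of the SRX is to send a test datagram from another host (for example with netcat: echo test | nc -u <splunkhostIP> 1514) and search for it in the target index.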
I have this props.conf:

[source::tcp:7660]
TRUNCATE=10000000
LINE_BREAKER = {\"time
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
pulldown_type = true
KV_MODE = json
#TZ = America/Chicago
TZ=UTC

I see that some events are not parsed in JSON format.
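One common cause of this symptom is that LINE_BREAKER must contain a capturing group: the text matched by the first group is discarded and used as the event boundary, so a value like {\"time with no group is unlikely to break events where intended, and some events end up split mid-JSON. A sketch of how the stanza is often written for concatenated JSON objects that each begin with {"time (the exact regex depends on what the raw stream really looks like):

    [source::tcp:7660]
    TRUNCATE = 10000000
    # group 1 (the whitespace between events) is discarded; "}" stays with the
    # previous event and {"time starts the next one
    LINE_BREAKER = \}([\r\n\s]*)\{\"time
    SHOULD_LINEMERGE = false
    NO_BINARY_CHECK = true
    KV_MODE = json
    TZ = UTC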
Hello, the documentation says that a stanza [host::<host>] in props.conf must be used with a host pattern. Is there a way to use a regexp? I have to match host names like "[vp][mnas][pdtiv].*".
Hi, I have a question about using saved searches vs macros with respect to the number of jobs that are produced in Splunk. I have a situation where we must run alerts every minute looking for issues; the SPL will call multiple other saved searches, which can in turn call more saved searches, and so on. We have noticed that this pushes the number of jobs very high in Splunk. The question is: if we used macros instead of saved searches, would the number of jobs decrease?
Hi Team, Nessus data has been missing in Splunk since the Nessus scanner's storage was increased. Nothing has been changed on the Splunk side. Kindly find the error below and assist.

ERROR pid=15132 tid=MainThread file=nessus_rest_client.py:request:91 | Failed to connect https://X.X.X.X:8834/scans/4098, reason=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/nessus_rest_client.py", line 79, in request
    headers=headers)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunktalib/httplib2/__init__.py", line 1593, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunktalib/httplib2/__init__.py", line 1335, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunktalib/httplib2/__init__.py", line 1291, in _conn_request
    response = conn.getresponse()
  File "/opt/splunk/lib/python2.7/httplib.py", line 1148, in getresponse
    response.begin()
  File "/opt/splunk/lib/python2.7/httplib.py", line 448, in begin
    version, status, reason = self._read_status()
  File "/opt/splunk/lib/python2.7/httplib.py", line 404, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/opt/splunk/lib/python2.7/socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "/opt/splunk/lib/python2.7/ssl.py", line 754, in recv
    return self.read(buflen)
  File "/opt/splunk/lib/python2.7/ssl.py", line 641, in read
    v = self._sslobj.read(len)
SSLError: ('The read operation timed out',)

Thanks, Ashwarya
Hi, my Splunk environment is on-prem. I have a single IDX which runs RHEL on a physical stand-alone server. Indexes are stored on a RAID 5 disk configuration on the same server. My disk is starting to fill up, and I was wondering if it is possible to manually delete older warm bucket files on my IDX (basically running an rm Linux command)? The indexes in question are not being used or written to any more, so I don't need to search or access them again. Is this OK to do? Will it create any inconsistency issues or errors in my Splunk env? Thanks.
Hi, I have a csv file that contains my error codes and their meanings. I import this csv as a lookup. The problem is that some codes have the same meaning, and when I generate a report it shows them separately. Here is my csv:

code    meaning
404     Page not found
402     Page not found

Current output:

Code    Meaning           Count
404     Page not found    25
402     Page not found    25

I need to consider them as one and count them like this:

Code        Meaning           TotalCount
404, 402    Page not found    50

FYI: if the meanings are the same, consider the codes the same and count them together. Any idea? Thanks.
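A sketch of one way to do this, grouping by meaning and collecting the codes back into a single cell; the base search and the lookup name errorcodes.csv are placeholders for whatever is actually in use:

    <your base search>
    | lookup errorcodes.csv code OUTPUT meaning
    | stats count as TotalCount, values(code) as Code by meaning
    | eval Code = mvjoin(Code, ", ")
    | rename meaning as Meaning
    | table Code, Meaning, TotalCount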
Hello all, I am looking to automatically extract all key-value pairs from the following event.

Oct 20 12:52:40 11.22.33.44 status: Info: Status: CPULd 0 DskIO 0 RAMUtil 4 QKUsd 0 QKFre 8388608 CrtMID 184222 CrtICID 2257015 CrtDCID 83915 InjMsg 126359 InjRcp 133519 GenBncRcp 2695 RejRcp 39342 DrpMsg 6 SftBncEvnt 9183 CmpRcp 139771 HrdBncRcp 3372 DnsHrdBnc 560 5XXHrdBnc 2406 FltrHrdBnc 0 ExpHrdBnc 406 OtrHrdBnc 0 DlvRcp 136399 DelRcp 0 GlbUnsbHt 0 ActvRcp 0 UnatmptRcp 0 AtmptRcp 0 CrtCncIn 0 CrtCncOut 0 DnsReq 5081719 NetReq 2029480 CchHit 5092135 CchMis 1888266 CchEct 891535 CchExp 1239213 CPUTTm 283360 CPUETm 123729299 MaxIO 7211 RAMUsd 182837940 MMLen 0 DstInMem 5 ResCon 0 WorkQ 0 QuarMsgs 0 QuarQKUsd 0 LogUsd 6 SophLd 0 BMLd 0 CASELd 0 TotalLd 53 LogAvail 175G EuQ 0 EuqRls 0 CmrkLd 0 McafLd 0 SwIn 1743004 SwOut 1980200 SwPgIn 8219419 SwPgOut 19669768 RptLd 0 QtnLd 0 EncrQ 0 InjBytes 21864651864

For that, I used the following regex: (?<key>\S+)\s(?<value>\d+). However, I cannot get a 100% match, because I only want this to apply to the data after the capitalized "Status:" in the example. Can anyone help me get a regex for this key-value pair extraction? Thanks in advance.
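A sketch of one SPL approach: first isolate everything after "Status:", then pull the pairs out of that substring; the final step that turns the pairs into real fields works by temporarily rewriting _raw and running the extract (kv) command over it, so it is a search-time trick rather than a props/transforms configuration. The base search and the field names in the final table are placeholders:

    <your base search>
    | rex field=_raw "Status:\s+(?<status_body>.+)$"
    | rex field=status_body max_match=0 "(?<key>\S+)\s(?<value>\d+)"
    | eval _raw = mvjoin(mvzip(key, value, "="), " ")
    | extract pairdelim=" " kvdelim="="
    | table CPULd DskIO RAMUtil WorkQ TotalLd InjBytes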
First event msg:

INFO | 2021-10-18 05:17 AM | BUSINESS RULE | Payload for ID#: 58916 with status Approved is published

Second event msg:

INFO | 2021-10-14 10:38 PM |  Message consumed: {"InputAmountToCredit":"22.67","CurrencyCode":"AUD","Buid":"1401","OrderNumber":"877118406","Cre ID":"58916"}

I want to have the sum of InputAmountToCredit based on status. The status can vary, and the ID is the common field between both events (but its key differs in each event). How can I extract fields for status, InputAmountToCredit and ID? I want to sum the amount for each ID that has the same status and generate a status-wise report.

Also, my searches are slow and I want to implement a summary index.
1. I am not sure: if tomorrow I want to update my summary index, will I be able to do so with new events?
2. How can I decide what should be part of the summary index, as right now I only have the requirement with these two events?
I am new to Splunk, so any pointers will help. Thanks for all the support.
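A sketch of how the extraction and status-wise sum could look, assuming both event types live in the same index/sourcetype (the base search is a placeholder) and that the JSON in the second event can be isolated with rex and parsed with spath. A scheduled version of the same search ending in | collect index=<summary_index> is the usual way to feed a summary index, and each scheduled run simply appends the newly summarized events:

    <your base search>
    | rex "Payload for ID#:\s*(?<id>\d+)\s+with status\s+(?<status>\w+)"
    | rex "Message consumed:\s*(?<json>\{.+\})"
    | spath input=json
    | eval id = coalesce(id, 'Cre ID'),
           amount = tonumber(InputAmountToCredit)
    | stats values(status) as status, sum(amount) as amount by id
    | stats sum(amount) as TotalAmount, dc(id) as IDs by status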
I am struggling to understand data retention for a SmartStore index. I guess I am thinking about it from a non-SmartStore way of handling data retention.

Below is my requirement:
Data ingestion per day - 1.5 TB
Total retention - 90 days
Local storage - 50 days
SmartStore storage - 40 days
Multi-site indexer cluster with 2 indexers in each AWS AZ, with RF = 3 and SF = 3

From the docs, I understand the following parameters are key for retention:

maxGlobalDataSizeMB
maxGlobalRawDataSizeMB
frozenTimePeriodInSecs

Since both maxGlobal* parameters apply mainly to warm buckets, how can I control data retention for each index as a whole, and not specifically for warm buckets, similar to maxTotalDataSizeMb? With a non-SmartStore index this was so easy: you just needed maxTotalDataSizeMb and frozenTimePeriodInSecs. Also, how can I control the size each index occupies on local storage (basically the ability to configure something similar to homePath.maxDataSizeMB, which is again ignored for SmartStore indexes)?

I am surprised none of this is documented clearly in the official Splunk docs, though some info can be found in the forums. How can I translate my requirements into these parameters on a per-index basis? Please help.
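For what it's worth, a sketch of how settings like these are often combined for SmartStore (the numbers are purely illustrative, not sized for 1.5 TB/day): time-based retention still comes from frozenTimePeriodInSecs, size-based retention is set per index across the whole cluster via maxGlobalDataSizeMB / maxGlobalRawDataSizeMB, and local residency is not a per-index retention knob at all but a cache-manager sizing and eviction question:

    # indexes.conf (per index)
    [my_index]
    remotePath = volume:remote_store/$_index_name
    frozenTimePeriodInSecs = 7776000        # 90 days total retention
    maxGlobalDataSizeMB = 1500000           # illustrative cluster-wide cap for this index
    # keep recently written buckets in the local cache longer (~50 days here)
    hotlist_recency_secs = 4320000

    # server.conf (per indexer) -- the cache manager, not the index, bounds local disk use
    [cachemanager]
    max_cache_size = 7500000                # MB of local cache this indexer may use
    eviction_policy = lru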
Hi, I have data with field names in the format: h00m00 h00m15 h00m30 h00m45 h01m00 ... through h23m45. I'd like to pull the 'h' value from a drop-down and use that in a chart command, so eval hh=substr('04:00:00',1,2) from the current token in the drop-down should give me 04. Then I want to somehow chart just the h04* fields, so I can focus on a specific hour as below, where I hardcoded the h04*:

| chart sum(h04*) by edate

If anyone has any ideas I'd be grateful.
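A sketch of the dashboard side of this, assuming a Simple XML drop-down whose selected value looks like "04:00:00": a <change>/<eval> block derives a two-digit hour token, which is then spliced straight into the wildcard in the chart command (the token names, choices and index are illustrative):

    <input type="dropdown" token="hour_full">
      <label>Hour</label>
      <choice value="04:00:00">04:00</choice>
      <choice value="05:00:00">05:00</choice>
      <change>
        <eval token="hh">substr($value$, 1, 2)</eval>
      </change>
    </input>

    <search>
      <query>
        index=my_stats | chart sum(h$hh$m*) by edate
      </query>
    </search>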
Hello, I have a list of 20 server IPs. I'm not a Splunk administrator. I need to find/match events where the source or destination IP is on this list of server IPs. Thank you.
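A sketch of two common ways to do this, assuming firewall-style events with fields named src_ip and dest_ip; the index, field names, example addresses, and the lookup file server_ips.csv (a single column called ip) are all placeholders:

Option 1 - inline list:

    index=firewall (src_ip IN ("10.0.0.1","10.0.0.2","10.0.0.3") OR dest_ip IN ("10.0.0.1","10.0.0.2","10.0.0.3"))

Option 2 - keep the 20 IPs in a lookup file and expand it with subsearches:

    index=firewall
        ( [ | inputlookup server_ips.csv | rename ip as src_ip  | fields src_ip  ]
       OR [ | inputlookup server_ips.csv | rename ip as dest_ip | fields dest_ip ] )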