All Topics


I have the following conf file configs to drop any event that contains the phrase "Allow all zones to query DNS anycast address" in Palo Alto logs:

props.conf

[pan:traffic]
TRANSFORMS-paloanycast = paloruledrop

transforms.conf

[paloruledrop]
REGEX = Allow all zones to query DNS anycast address
DEST_KEY = queue
FORMAT = nullQueue

Snippet of event:

Nov 20 07:11:00 firewall,2021/11/20 07:10:59,010701010006,TRAFFIC,end,2305,2021/11/20 07:10:59,x.x.x.x,y.y.y.y,0.0.0.0,0.0.0.0,Allow all zones to query DNS anycast address,,,dns,vsys7,vrf,xxx-xx-x,xxx.xxx,xxx.xxx,data-data-data,2021/11/20...

I applied the changes to my cluster master and pushed them to the indexers (verified the indexers got the config), but these "Allow all zones to query DNS anycast address" events are not being dropped. I verified the regex on regex101.com. Any help would be appreciated. Thanks.
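For what it's worth, this is the kind of check that would confirm whether the events are still being indexed and what sourcetype they actually carry (the search string comes from the event above; everything else is a placeholder):

index=* "Allow all zones to query DNS anycast address" earliest=-1h
| stats count by index, sourcetype, splunk_server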
Hi, I need to find the 5 "Errors" peak points per server and sort them by date.

Here is my SPL:

index="myindex" err*
| rex field=source "\/data\/(?<product>\w+)\/(?<date>\d+)\/(?<servername>\w+)"
| eventstats count as Errors by servername

Expected output:

servername    Time                        peak point Errors count
server1       2021-11-19 02:00:00,000     500
              2021-11-19 10:00:00,000     450
              2021-11-19 18:00:00,000     300
              2021-11-19 20:00:00,000     800
              2021-11-19 23:00:00,000     9000
server2       2021-11-19 01:00:00,000     250
              2021-11-19 03:00:00,000     480
              2021-11-19 08:00:00,000     30000
              2021-11-19 09:00:00,000     463
              2021-11-19 10:00:00,000     100
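For illustration, this is the sort of approach I have been considering but have not tested: bucket the events by hour, count per server, keep the five largest buckets for each server, then re-sort by time (the span and field names are guesses):

index="myindex" err*
| rex field=source "\/data\/(?<product>\w+)\/(?<date>\d+)\/(?<servername>\w+)"
| bin _time span=1h
| stats count as Errors by servername, _time
| sort 0 servername, -Errors
| streamstats count as rank by servername
| where rank <= 5
| sort 0 servername, _time
| table servername, _time, Errors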
Hi, I got this error when I search on a specific index:

index="myindex"

Error in 'IndexScopedSearch': The search failed. More than 1000000 events found at time 1637181000.

1. What is the reason for this error?
2. How can I resolve this issue?

Any ideas? Thanks.
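In case it helps frame the question, this is a rough way to see how many events pile up on a single second around that timestamp (tstats keeps it cheap; the index name is the one from above):

| tstats count where index=myindex by _time span=1s
| sort -count
| head 10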
Hello, I have a situation where there are 25 different groups of people and each group uses a different set of data (i.e. there are 25 different data sets, one for each group). We need to create a single dashboard and also assign access roles so that each group can only reach its own data set through that same dashboard.

How would I assign the access roles for those 25 different groups? Should I create 25 different indexes, one for each group, and assign an access role to each index? Is it possible to create 25 different sourcetypes under one index and assign access roles to those sourcetypes, one for each group? Or are there other ways? Thank you so much, any help will be highly appreciated.
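To make the first option concrete, this is roughly the per-group role I had in mind if we went with one index per group (the role and index names are made up):

authorize.conf

[role_group01]
srchIndexesAllowed = group01_data
srchIndexesDefault = group01_data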
We have specific IDs that track how requests move through the system. What I want to do is search for all these IDs in one index and then join all the other indexes to see how they funnel down -- essentially track how a request moves through our system and then get all the associated service requests. I was thinking something like:

index=foo
| fields service, ID
| join ID [search index=*]
| stats count by service
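For comparison, a join-free variant I have also been sketching (untested; it assumes the ID field is extracted in every index):

index=* ID=*
| stats values(index) as indexes, values(service) as services, count by ID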
I am trying to correlate 2 different logs: one is in EST and the other is in UTC. For the UTC logs, I have tried to specify the time zone as UTC in props and then let my user timezone preference do the conversion. Props was only updated on the search head:

[sourcetype]
TZ = UTC

This does not seem to work, and I have read different posts saying that updating the props file should work, but not for historical events. Is there a way to do this for already-indexed events?
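For context, my understanding is that the stanza belongs on the parsing tier (the indexers or a heavy forwarder) rather than the search head, because TZ is applied at index time and so only affects newly indexed events. A placement sketch, with the stanza name as a placeholder:

props.conf on the indexers / heavy forwarder

[my_utc_sourcetype]
TZ = UTC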
I have a log sample:

| LRU Config Message from RMQ: {"endpoint":"lru/ config", "data":{"timestamp":1637322539.953,"version":"1","aircraftTailId":"N123JB",[{"lruComponent":"Modem","lruHwPartNumber":"123456","lruHwSerialNumber":"C82821190191","lruRevisionNumber":"004","lruMacAddress":true}, {"lruComponent":"Server","lruHwPartNumber":"1244632","lruHwSerialNumber":"F39718480040","lruRevisionNumber":"004","lruMacAddress":null},

What I want to do is extract the data and create a table from the highlighted fields, like this:

lruComponent    lruHwPartNumber
Modem           123456
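For illustration, the direction I have been experimenting with (untested; it assumes the component and part-number keys always appear back to back, as in the sample):

... | rex max_match=0 "\"lruComponent\":\"(?<lruComponent>[^\"]+)\",\"lruHwPartNumber\":\"(?<lruHwPartNumber>[^\"]+)\""
| eval pair=mvzip(lruComponent, lruHwPartNumber)
| mvexpand pair
| eval lruComponent=mvindex(split(pair,","),0), lruHwPartNumber=mvindex(split(pair,","),1)
| table lruComponent, lruHwPartNumber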
Hey! We upgraded Splunk Enterprise Security to the latest version a few weeks ago. Before, it was on version 4.x, I believe. It was detecting events before we upgraded; after the upgrade, no more events.

Some of the events that were being triggered were: Brute Force Access Behavior Detected, Geographically Improbable Access Detected, and Threat Activity Detected.

Where can I go to find out why I am not detecting these items after the upgrade? The objects for them are enabled in the Content Management screen. I will copy and paste what it says below:

Statistics
Avg. Event Count: 0
Avg. Result Count: 0
Avg. Run Time: 0:00:01
Invocations: 24
Skipped: 0
Success: 24
Update Time: Nov 19, 2021 1:00:00 PM

The data models have a green checkmark and so do the lookups.
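One check I have been running while digging, shared in case it clarifies the question: seeing whether the accelerated data model a given correlation search reads from actually returns anything over the same window (the data model name here is just an example):

| tstats summariesonly=true count from datamodel=Authentication by _time span=1h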
We stopped receiving data from Tenable a few days ago. When I went to investigate, I could find nothing that had changed. But now we cannot add/edit our Tenable accounts without getting "No Tenable.sc Instance at <fqdn:443>".

Things I was able to do:
- Log into Tenable with the credentials just fine.
- Perform a Test-NetConnection <FQDN> -Port 443 successfully.
- nslookup was good.
- Able to ping.

Things I tried that failed:
- Use the FQDN and the IP address.

The app is installed on our heavy forwarder, which has the search head and KVStore roles. The Splunk Enterprise version is 8.2.2.1, on Windows Server 2019. We were running TA-tenable version 5.0.1 and it wasn't working; I upgraded to TA-tenable 5.2.1 and got the same error.

From our logs:

python.log
11-19-2021 07:20:52.558 -0800 ERROR AdminManagerExternal [8132 TcpChannelThread] - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- No Tenable.sc Instance at <fqdn>". See splunkd.log/python.log for more details.

splunkd.log
11-19-2021 07:20:52.558 -0800 ERROR AdminManagerExternal [18606 TcpChannelThread] - Stack trace from python handler:\n Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init\n hand.execute(info)\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute\n if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/admin_external.py", line 40, in wrapper\n for entity in result:\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/handler.py", line 113, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/handler.py", line 82, in wrapper\n check_existing(self, name),\n File "<string>", line 21, in validate\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in validate\n self._loop_fields('validate', name, data, existing=existing)\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/__init__.py", line 78, in _loop_fields\n model.fields,\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/__init__.py", line 77, in <lambda>\n lambda f: getattr(f, meth)(data, *args, **kwargs),\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/field.py", line 51, in validate\n raise RestError(400, self.validator.msg)\nsplunktaucclib.rest_handler.error.RestError: REST Error [400]: Bad Request -- Please enter valid Address, Username and Password or configure valid proxy settings or verify SSL certificate.\n

ta_tenable_securitycenter.log
11-19-2021 07:20:52.55,558 ERROR pid=5980 tid=MainThread file=v1.py:_request:497 | Requests Error: HTTPSConnectionPool(host='<fqdn>', port=443): Max retries exceeded with url: /rest/system (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))

I've been troubleshooting this for the last few days and any help would be appreciated.
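Since the last error points at the TLS handshake rather than basic reachability, this is the check I have been leaning on to see what the server offers during negotiation (it assumes an OpenSSL client is available on or near the forwarder; <fqdn> is the same placeholder as above):

openssl s_client -connect <fqdn>:443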
I am trying to extract fields from log output but am struggling with how to. I have this query:

<query>index=dap ("user login Time")</query>

and a log that outputs:

log: [2m2021-11-19-t18:27:42.996z [22m [34m auth [39m [32minfo user login time, Justin [ ...
stream: stdout
time: 2021-11-19-t18:2742.99648142z

I want to output only the time and username:

Time                        user
2021-11-19-t18:27:42        Justin
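For illustration, the kind of extraction I have been attempting (untested; it assumes the username always follows "user login time," and that the trailing time: field looks like the sample):

index=dap "user login time"
| rex "user login time,\s*(?<user>\w+)"
| rex "time:\s*(?<login_time>\S+)"
| table login_time, user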
I have a Splunk deployment which is monitoring a fair number of network devices. One in particular has had an issue the past few weeks wherein Splunk will show well over a thousand events with the same _time down to the millisecond, and then the host will show no events in Splunk until we reset the service. Some sample data follows the query used to gather it. I have redacted device info from _raw beyond the device's event_id and device_time.

host=*redacted*
| eval indextime = strftime(_indextime, "%Y-%m=%d %H:%M:%S:%N")
| eval latency = _indextime - _time
| sort device_time
| table _time, indextime, latency, _raw

ID     _time                    indextime                latency     _raw
1      2021-11-18 17:02:03.000  2021-11-18 17:02:03.000  0           235463: 236034: Nov 18 2021 17:01:42.197: <other data>
2      2021-11-18 17:01:57.236  2021-11-18 17:04:07.000  129.764     235465: 236036: Nov 18 2021 17:02:14.200: <other data>
...
147    2021-11-18 17:01:57.236  2021-11-18 17:22:40.000  1242.764    235607: 236178: Nov 18 2021 17:22:39.196: <other data>
148    2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801     235609: 236180: Nov 18 2021 Nov 18 2021 17:22:40.008: <other data>
149    2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801     235610: 236181: Nov 18 2021 Nov 18 2021 17:22:40.226: <other data>
150    2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801     235611: 236182: Nov 18 2021 Nov 18 2021 17:22:41.099: <other data>
151    2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801     235612: 236183: Nov 18 2021 Nov 18 2021 17:22:54.084: <other data>
152    2021-11-18 17:22:39.199  2021-11-18 17:24:53.000  133.801     235613: 236184: Nov 18 2021 Nov 18 2021 17:23:15.428: <other data>
...
160    2021-11-18 17:22:39.199  2021-11-18 17:24:53.000  133.801     235621: 236192: Nov 18 2021 Nov 18 2021 17:23:26.087: <other data>
161    2021-11-18 17:22:39.199  2021-11-18 17:24:56.000  136.801     235622: 236193: Nov 18 2021 Nov 18 2021 17:23:26.087: <other data>
...
1329   2021-11-18 17:22:39.199  2021-11-18 21:29:24.000  14804.801   236781: 237364: Nov 18 2021 21:29:23.516: <other data>

Everything is working prior to ID 1, and after 1329 we have no data for about an hour or so until we reset Splunk.

From ID 2 through ID 147, you can see the _time value is exactly the same, while the indextime continues to increment more or less appropriately with the device_time given in _raw. IDs 148-152 show events with the same _time and _indextime values despite properly incrementing device_time in _raw. IDs 152-160 show the same. Then, through to the end of the sample data, there are 1,173 events that get _time = 2021-11-18 17:22:39.199, with an average time between _indextime and device_time of 16.115s.

From what I can see, it looks like the host is sending perfectly fine data to Splunk, which is correctly indexing the events (_indextime) while assigning an incorrect event time (_time). Looking around here and trying to figure out what might be going wrong, I thought there might be an issue with some time settings somewhere. We have Splunk and the host in question set to the same timezone, and the host uses NTP to maintain a synchronized clock. Checking NTP, we haven't seen any issues surrounding these events.

We are quite open to any ideas here.
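For reference, this is the direction I have been looking at on the props side, in case it clarifies what I mean by "time settings". Everything here is a guess based on the sample rows above (the sourcetype name, prefix regex, and lookahead), not what is currently deployed:

props.conf on the parsing tier

[my_network_device_syslog]
TIME_PREFIX = ^\d+:\s+\d+:\s+
TIME_FORMAT = %b %d %Y %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30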
_time: 2021-11-19T11:34:02.000+0000
date_hour: 11
date_mday: 19
date_wday: friday
date_year: 2021
date_zone: -300

Raw log snippet: [19/Nov/2021:11:34:02 -0500]

2021-11-19T11:34:02.000+0000 indicates UTC. Does this indicate the timezone?
Our networking team is looking to determine log rates for the different systems reporting in Splunk. How can we determine how often a log is created for an individual system, and the average size of these logs? Thank you!
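For illustration, the rough per-host numbers I had in mind (untested; it counts events per host over a day and approximates size from the raw event length):

index=* earliest=-24h
| eval bytes=len(_raw)
| stats count as events, avg(bytes) as avg_event_bytes, sum(bytes) as total_bytes by host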
Hi - I cannot download apps from Splunkbase on the server. Could you please let me know how to download an app onto my laptop from Splunkbase? Once I download it, I want to move it to the search head server (not accessible from outside) and install it there. Thank you.
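For reference, this is the install step I assume comes after copying the package over to the search head (the path and filename are placeholders):

splunk install app /tmp/my_downloaded_app.tgz
splunk restart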
Hi Splunkers,

My team is tackling an ingestion issue where we are seeing an overworked HF, and I wanted to get the community's best practice on this type of problem.

My team is bringing in CrowdStrike FDR logs via the CrowdStrike add-on (Python script API query) on our heavy forwarder. If we did not filter, it would bring in 20+ TB a day of logs, so we filter pretty heavily. Currently, our approach is to send everything to nullQueue and then pick events via regex and send them to the indexQueue. We are seeing typingQueue blocks on this heavy forwarder, which makes me think the approach may not be the best route. I think that the regex being performed in the pipeline may be overworking the HF. Any advice would be great!

Our props for the sourcetype:

[CrowdStrike:Replicator:Data:JSON]
INDEXED_EXTRACTIONS = JSON
MAX_TIMESTAMP_LOOKAHEAD = 1024
TIME_FORMAT = %s%3N
TZ=UTC
TIME_PREFIX=\"timestamp\":\"
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = false
TRUNCATE = 150000
TRANSFORMS-setcsfdr= csfdr_log_setnull,csfdr_log_setparsing,csfdr_log_setfilter,csfdr_log_setfilter2

Transforms:

[csfdr_log_setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[csfdr_log_setparsing]
REGEX = (?:"event_simpleName":".*Written"|"event_simpleName":"DnsRequest"|"event_simpleName":"NetworkConnectIP4"|"event_simpleName":"OciContainerInfo"|"event_simpleName":"ProcessRollup2"|"event_simpleName":"UserLogon.*"|"event_simpleName":"RemovableMediaVolumeMounted"|"event_simpleName":"FsVolumeUnmounted"|"event_simpleName":"FsVolumeMounted"|"event_simpleName":"SyntheticProcessRollup2"|"event_simpleName":"DmpFileWritten"|"event_simpleName":"OsVersionInfo")
DEST_KEY = queue
FORMAT = indexQueue

[csfdr_log_setfilter]
REGEX = (?:("ParentBaseFileName":("TPython\.exe"|"TANIUMCLIENT\.EXE"|"[Tt]anium[Cc]lient\.exe"|"[Tt]anium[Ee]xec[Ww]rapper\.exe"|"[Tt]anium[Cc]lient"|"[Ss]plunkd\.exe"))|("CommandLine":"\/bin(\/bash|\/sh)\s\/opt\/[Tt]anium\/[Tt]anium[Cc]lient)|("CommandLine":".*[Tt][Pp]ython\.exe\\")|("CommandLine":".*[Tt]anium.*")|("GrandParentBaseFileName":"[Tt]anium[Cc]lient\.exe"))
DEST_KEY = queue
FORMAT = nullQueue

[csfdr_log_setfilter2]
REGEX = (?:([Tt][Aa][Nn][Ii][Uu][Mm]))
DEST_KEY = queue
FORMAT = nullQueue
########NLSN
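For discussion, one variation I have been toying with to cut the per-event regex work: factoring the repeated "event_simpleName" key out of the allow-list so the engine matches the prefix once instead of retrying it for every alternative. This is untested and only intended to match the same events as the csfdr_log_setparsing stanza above:

[csfdr_log_setparsing]
REGEX = "event_simpleName":"(?:\w+Written|DnsRequest|NetworkConnectIP4|OciContainerInfo|ProcessRollup2|UserLogon\w*|RemovableMediaVolumeMounted|FsVolumeUnmounted|FsVolumeMounted|SyntheticProcessRollup2|DmpFileWritten|OsVersionInfo)"
DEST_KEY = queue
FORMAT = indexQueue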
Hello. I am running 8.2.2 on Linux. I have a SHC with three members. I have three indexes. I would like to restrict the searchable index for each role, and I would like to understand the best way to distribute that change. I used the web GUI to create the roles, which the cluster replicated. However, the GUI does not permit non-internal indexes to be deselected. Therefore, I have edited authorize.conf on each member. I am using srchIndexesDisallowed. An account with role_user_a should only be able to search index_a.

The configuration below works, but how should I manage changes like this given the GUI limitation -- should I continue to edit the file directly (along with authentication.conf) going forward (and not use the GUI)?

$ splunk btool --debug authorize list role_user_a
/opt/splunk/etc/system/local/authorize.conf   [role_user_a]
/opt/splunk/etc/system/local/authorize.conf   cumulativeRTSrchJobsQuota = 0
/opt/splunk/etc/system/local/authorize.conf   cumulativeSrchJobsQuota = 0
/opt/splunk/etc/system/local/authorize.conf   importRoles = user
/opt/splunk/etc/system/default/authorize.conf rtSrchJobsQuota = 6
/opt/splunk/etc/system/default/authorize.conf run_collect = enabled
/opt/splunk/etc/system/default/authorize.conf run_mcollect = enabled
/opt/splunk/etc/system/default/authorize.conf schedule_rtsearch = enabled
/opt/splunk/etc/system/default/authorize.conf srchDiskQuota = 100
/opt/splunk/etc/system/default/authorize.conf srchFilterSelecting = true
/opt/splunk/etc/system/local/authorize.conf   srchIndexesAllowed = index_a
/opt/splunk/etc/system/local/authorize.conf   srchIndexesDefault = index_a
/opt/splunk/etc/system/local/authorize.conf   srchIndexesDisallowed = index_b;index_c
/opt/splunk/etc/system/default/authorize.conf srchJobsQuota = 3

Thanks for your help.
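For discussion, the distribution route I have been considering instead of editing each member directly (the app name is a placeholder; this assumes the usual deployer workflow for the SHC):

# On the deployer, place the settings in an app, e.g.
#   $SPLUNK_HOME/etc/shcluster/apps/org_authorize/local/authorize.conf
# then push the bundle to the cluster:
splunk apply shcluster-bundle -target https://shc-member1:8089 -auth admin:changeme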
On a heavy forwarder, I deployed the Rubrik add-on per the instructions provided in the quick start guide. I am having an issue during configuration: on the Account tab, the page does not load. I see a spinning wheel and the message "Loading". Also, the "Add" button is missing.

splunkd.log message:

ERROR AdminManagerExternal - unexpected error "<class 'splunkaucclib.rest_handler.error.RestError''>" from python handler" "Rest Error [500]: Internal Server Error -- Traceback (most recent call)
Hello, we have a chart in a dashboard where the x-axis is time. We defined a drilldown where the $ts$ token should carry the timestamp of the point clicked on the line chart. The problem is that we need $ts$ to be the Unix timestamp of the user's local time, but what comes through is always UTC. How would I transform the $ts$ token to represent the user's local time as a Unix timestamp? Kind regards, Kamil
Hello, I have 4 Python scripts on a HF. My plan is to run those Python scripts automatically through my HF. How would I do that? Thank you, and any help will be highly appreciated.
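For illustration, the kind of scripted-input stanza I assume this would need, one per script (the app path, interval, sourcetype, and index are placeholders):

inputs.conf on the heavy forwarder

[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_script.py]
interval = 300
sourcetype = my_script_output
index = main
disabled = 0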
Hi - trying to parse 2 similar sourcetypes with props.conf and transforms.conf but they are not working. Help would be appreciated! Thanks!

Example events:

sourcetype=avaya:epm:mpplogs
@2021-11-19 09:41:54,070|PAVB_03335|INFO|VB|650636|Session=aipor-mpp001lv-2021323144040-7|Got VoiceXML exception: noinput in 9b99c62c5d35f81d18e547137018bef9663c3bc7a33f60a3f25aa4d55d36e14f|aipor-mpp001lv####

sourcetype=avaya:epm:vpmslogs
@2021-11-19 09:51:10,411 EST||FINE|AppIntfService|VoicePortal|ajp-nio-127.0.0.1-3009-exec-41|Method=PackageInfo::GetBuildVersion()| attempt to locate file on classpath. File = VPAppIntfService.aar|||||||aipva-epm001lv|4000064385####

props.conf

[avaya:epm:mpplogs]
REPORT-pipe-separated-fields-mpp = pipe-separated-fields-mpp

[avaya:epm:vpmslogs]
REPORT-pipe-separated-fields-vpms = pipe-separated-fields-vpms

transforms.conf

[pipe-separated-fields-mpp]
DELIMS = "|"
FIELDS = "eventTimestamp","eventName","eventLevel","triggerComponent","eventId","eventText","eventDescription","serverName"

[pipe-separated-fields-vpms]
DELIMS = "|"
FIELDS = "eventTimestamp","eventName","eventLevel","triggerComponent","eventMonitor","eventDescription"

(I've tried with and without quotes.)