All Topics


I am trying to extract fields from a log event but am struggling with how to do it. I have this query:

index=dap ("user login Time")

The log it returns looks like this (the bracketed codes are ANSI color sequences):

log: [2m2021-11-19-t18:27:42.996z [22m [34m auth [39m [32minfo user login time, Justin [ ... stream: stdout time: 2021-11-19-t18:2742.99648142z

I want to output only the time and the username, for example:

Time: 2021-11-19-t18:27:42    user: Justin
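For what it's worth, a minimal rex-based sketch of such an extraction, assuming the username always follows "user login time," and that the timestamp keeps the format shown above (the field names login_time and user are illustrative only):

index=dap ("user login Time")
| rex field=_raw "user login time,\s+(?<user>\w+)"
| rex field=_raw "(?<login_time>\d{4}-\d{2}-\d{2}-t\d{2}:\d{2}:\d{2}\.\d+z)"
| table login_time user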
I have a Splunk deployment which is monitoring a fair number of network devices. One in particular has had an issue for the past few weeks wherein Splunk will show well over a thousand events with the same _time down to the millisecond, and then no events show in Splunk for that host until we reset the service. Some sample data follows, along with the query used to gather it. I have redacted device info from _raw beyond the device's event_id and device_time.

host=*redacted*
| eval indextime = strftime(_indextime, "%Y-%m-%d %H:%M:%S:%N")
| eval latency = _indextime - _time
| sort device_time
| table _time, indextime, latency, _raw

ID    _time                    indextime                latency    _raw
1     2021-11-18 17:02:03.000  2021-11-18 17:02:03.000  0          235463: 236034: Nov 18 2021 17:01:42.197: <other data>
2     2021-11-18 17:01:57.236  2021-11-18 17:04:07.000  129.764    235465: 236036: Nov 18 2021 17:02:14.200: <other data>
...
147   2021-11-18 17:01:57.236  2021-11-18 17:22:40.000  1242.764   235607: 236178: Nov 18 2021 17:22:39.196: <other data>
148   2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801    235609: 236180: Nov 18 2021 Nov 18 2021 17:22:40.008: <other data>
149   2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801    235610: 236181: Nov 18 2021 Nov 18 2021 17:22:40.226: <other data>
150   2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801    235611: 236182: Nov 18 2021 Nov 18 2021 17:22:41.099: <other data>
151   2021-11-18 17:22:39.199  2021-11-18 17:24:51.000  131.801    235612: 236183: Nov 18 2021 Nov 18 2021 17:22:54.084: <other data>
152   2021-11-18 17:22:39.199  2021-11-18 17:24:53.000  133.801    235613: 236184: Nov 18 2021 Nov 18 2021 17:23:15.428: <other data>
...
160   2021-11-18 17:22:39.199  2021-11-18 17:24:53.000  133.801    235621: 236192: Nov 18 2021 Nov 18 2021 17:23:26.087: <other data>
161   2021-11-18 17:22:39.199  2021-11-18 17:24:56.000  136.801    235622: 236193: Nov 18 2021 Nov 18 2021 17:23:26.087: <other data>
...
1329  2021-11-18 17:22:39.199  2021-11-18 21:29:24.000  14804.801  236781: 237364: Nov 18 2021 21:29:23.516: <other data>

Everything is working prior to ID 1, and after ID 1329 we have no data for about an hour or so, until we reset Splunk. From ID 2 through ID 147, you can see the _time value is exactly the same, while the indextime continues to increment more or less appropriately with the device_time given in _raw. IDs 148-152 show events with the same _time and _indextime values despite a properly incrementing device_time in _raw, and IDs 152-160 show the same. Then, through to the end of the sample data, there are 1,173 events that get _time = 2021-11-18 17:22:39.199, with an average gap between _indextime and device_time of 16.115s.

From what I can see, it looks like the host is sending perfectly fine data to Splunk, which is correctly indexing the events (_indextime) while assigning an incorrect event time (_time). Looking around here and trying to figure out what might be going wrong, I thought there might be an issue with some time settings somewhere. We have Splunk and the host in question set to the same timezone, and the host uses NTP to maintain a synchronized clock. Checking NTP, we haven't seen any issues surrounding these events.

We are quite open to any ideas here.
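Since the post suspects a time-settings issue, for reference this is roughly what an explicit timestamp-recognition stanza for the device's sourcetype could look like in props.conf on the parsing tier; the sourcetype name, regex, and timezone below are assumptions for illustration only and are not taken from the post:

# props.conf on the indexer or heavy forwarder (stanza name is hypothetical)
[network:device:syslog]
# Skip the two leading counters (e.g. "235463: 236034: ") before the device timestamp -- regex is an assumption
TIME_PREFIX = ^\d+:\s+\d+:\s+
TIME_FORMAT = %b %d %Y %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = America/New_York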
_time: 2021-11-19T11:34:02.000+0000
date_hour: 11
date_mday: 19
date_wday: friday
date_year: 2021
date_zone: -300

Raw log snippet: [19/Nov/2021:11:34:02 -0500]

The value 2021-11-19T11:34:02.000+0000 indicates UTC. Does this indicate the timezone?
Our networking team is looking to determine Log Rates for different systems reporting in Splunk. How can we determine how often a log is created for an individual system and determine the average size of these logs? Thank you!  
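As a rough sketch (the index and the use of host as the per-system identifier are assumptions), something like the following counts events per host and averages the raw event size over the last 24 hours:

index=* earliest=-24h
| eval event_bytes = len(_raw)
| stats count AS event_count avg(event_bytes) AS avg_event_bytes by host
| eval events_per_hour = round(event_count / 24, 2)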
Hi - I cannot download apps from Splunkbase directly from the server. Could you please let me know how to download an app onto my laptop from Splunkbase? Once I download it, I want to move it to the search head server (not accessible from outside) and install it there. Thank you
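For reference, once an app package (.tgz or .spl) downloaded from splunkbase.splunk.com has been copied to the search head, it can usually be installed from the CLI; the file name, destination path, and credentials below are placeholders only:

scp my_app.tgz admin@searchhead:/tmp/
$SPLUNK_HOME/bin/splunk install app /tmp/my_app.tgz -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart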
Hi Splunkers,

My team is tackling an ingestion issue where we are seeing an overworked HF, and I wanted to get the community's best practice on this type of problem.

My team is bringing in CrowdStrike FDR logs via the CrowdStrike add-on (Python script API query) on our heavy forwarder. If we did not filter, it would bring in 20+ TB of logs a day, so we filter pretty heavily. Currently, our approach is to send everything to nullQueue and then pick events via regex and send them to the indexQueue. We are seeing typingQueue blocks on this heavy forwarder, which makes me think this approach may not be the best route. I think the regex being performed in the pipeline may be overworking the HF. Any advice would be great!

Our props for the sourcetype:

[CrowdStrike:Replicator:Data:JSON]
INDEXED_EXTRACTIONS = JSON
MAX_TIMESTAMP_LOOKAHEAD = 1024
TIME_FORMAT = %s%3N
TZ = UTC
TIME_PREFIX = \"timestamp\":\"
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = false
TRUNCATE = 150000
TRANSFORMS-setcsfdr = csfdr_log_setnull,csfdr_log_setparsing,csfdr_log_setfilter,csfdr_log_setfilter2

TRANSFORMS:

[csfdr_log_setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[csfdr_log_setparsing]
REGEX = (?:"event_simpleName":".*Written"|"event_simpleName":"DnsRequest"|"event_simpleName":"NetworkConnectIP4"|"event_simpleName":"OciContainerInfo"|"event_simpleName":"ProcessRollup2"|"event_simpleName":"UserLogon.*"|"event_simpleName":"RemovableMediaVolumeMounted"|"event_simpleName":"FsVolumeUnmounted"|"event_simpleName":"FsVolumeMounted"|"event_simpleName":"SyntheticProcessRollup2"|"event_simpleName":"DmpFileWritten"|"event_simpleName":"OsVersionInfo")
DEST_KEY = queue
FORMAT = indexQueue

[csfdr_log_setfilter]
REGEX = (?:("ParentBaseFileName":("TPython\.exe"|"TANIUMCLIENT\.EXE"|"[Tt]anium[Cc]lient\.exe"|"[Tt]anium[Ee]xec[Ww]rapper\.exe"|"[Tt]anium[Cc]lient"|"[Ss]plunkd\.exe"))|("CommandLine":"\/bin(\/bash|\/sh)\s\/opt\/[Tt]anium\/[Tt]anium[Cc]lient)|("CommandLine":".*[Tt][Pp]ython\.exe\\")|("CommandLine":".*[Tt]anium.*")|("GrandParentBaseFileName":"[Tt]anium[Cc]lient\.exe"))
DEST_KEY = queue
FORMAT = nullQueue

[csfdr_log_setfilter2]
REGEX = (?:([Tt][Aa][Nn][Ii][Uu][Mm]))
DEST_KEY = queue
FORMAT = nullQueue
########NLSN
Hello. I am running 8.2.2 on Linux. I have a SHC with three members and three indexes. I would like to restrict the searchable index for each role, and I would like to understand the best way to distribute that change. I used the web GUI to create the roles, which the cluster replicated. However, the GUI does not permit non-internal indexes to be deselected. Therefore, I have edited authorize.conf on each member. I am using srchIndexesDisallowed. An account with role_user_a should only be able to search index_a.

The configuration below works, but how should I manage changes like this given the GUI limitation -- should I continue to edit the file directly (along with authentication.conf) going forward, and not use the GUI?

$ splunk btool --debug authorize list role_user_a
/opt/splunk/etc/system/local/authorize.conf   [role_user_a]
/opt/splunk/etc/system/local/authorize.conf   cumulativeRTSrchJobsQuota = 0
/opt/splunk/etc/system/local/authorize.conf   cumulativeSrchJobsQuota = 0
/opt/splunk/etc/system/local/authorize.conf   importRoles = user
/opt/splunk/etc/system/default/authorize.conf rtSrchJobsQuota = 6
/opt/splunk/etc/system/default/authorize.conf run_collect = enabled
/opt/splunk/etc/system/default/authorize.conf run_mcollect = enabled
/opt/splunk/etc/system/default/authorize.conf schedule_rtsearch = enabled
/opt/splunk/etc/system/default/authorize.conf srchDiskQuota = 100
/opt/splunk/etc/system/default/authorize.conf srchFilterSelecting = true
/opt/splunk/etc/system/local/authorize.conf   srchIndexesAllowed = index_a
/opt/splunk/etc/system/local/authorize.conf   srchIndexesDefault = index_a
/opt/splunk/etc/system/local/authorize.conf   srchIndexesDisallowed = index_b;index_c
/opt/splunk/etc/system/default/authorize.conf srchJobsQuota = 3

Thanks for your help.
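For context, in a search head cluster, configuration files like this are usually distributed by placing them in an app on the deployer under $SPLUNK_HOME/etc/shcluster/apps and pushing the bundle, rather than editing each member directly. A minimal sketch, where the app name org_all_roles is purely an example:

# $SPLUNK_HOME/etc/shcluster/apps/org_all_roles/local/authorize.conf  (on the deployer)
[role_user_a]
importRoles = user
srchIndexesAllowed = index_a
srchIndexesDefault = index_a
srchIndexesDisallowed = index_b;index_c

# Then push the bundle to the cluster members:
# splunk apply shcluster-bundle -target https://<any_member>:8089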
On a heavy forwarder, I deployed the Rubrik add-on per the instructions in the quick start guide. I am having an issue during configuration: on the Configuration >> Account tab, the page does not load. I see a spinning wheel and a "Loading" message, and the "Add" button is missing.

splunkd.log message:

ERROR AdminManagerExternal - unexpected error "<class 'splunkaucclib.rest_handler.error.RestError'>" from python handler: "Rest Error [500]: Internal Server Error -- Traceback (most recent call)
Hello, We have a chart in a dashboard where the x-axis is time. We defined a drilldown where the $ts$ token should transmit the timestamp when the line chart is clicked. The point is that we need $ts$ to be the unix timestamp of the user's local time, but what comes through is always UTC. How would I transform the $ts$ token so it represents the user's local time as a unix timestamp? Kind Regards, Kamil
Hello, I have 4 Python scripts on a HF. My plan is to run those Python scripts automatically through my HF. How would I do that? Thank you, and any help will be highly appreciated.
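One common pattern for this is a scripted input declared in inputs.conf inside an app on the heavy forwarder; a minimal sketch, where the app name, script name, interval, sourcetype, and index are all placeholders:

# $SPLUNK_HOME/etc/apps/my_scripts/local/inputs.conf  (app and script names are examples)
[script://$SPLUNK_HOME/etc/apps/my_scripts/bin/my_script.py]
interval = 300
sourcetype = my:script:output
index = main
disabled = 0

The script lives in the app's bin directory, and anything it writes to stdout is indexed with the sourcetype and index given above.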
Hi - trying to parse 2 similar sourcetypes with props.conf and transforms.conf, but they are not working. Help would be appreciated! Thanks!

Example events:

sourcetype=avaya:epm:mpplogs
@2021-11-19 09:41:54,070|PAVB_03335|INFO|VB|650636|Session=aipor-mpp001lv-2021323144040-7|Got VoiceXML exception: noinput in 9b99c62c5d35f81d18e547137018bef9663c3bc7a33f60a3f25aa4d55d36e14f|aipor-mpp001lv####

sourcetype=avaya:epm:vpmslogs
@2021-11-19 09:51:10,411 EST||FINE|AppIntfService|VoicePortal|ajp-nio-127.0.0.1-3009-exec-41|Method=PackageInfo::GetBuildVersion()| attempt to locate file on classpath. File = VPAppIntfService.aar|||||||aipva-epm001lv|4000064385####

props.conf

[avaya:epm:mpplogs]
REPORT-pipe-separated-fields-mpp = pipe-separated-fields-mpp

[avaya:epm:vpmslogs]
REPORT-pipe-separated-fields-vpms = pipe-separated-fields-vpms

transforms.conf

[pipe-separated-fields-mpp]
DELIMS = "|"
FIELDS = "eventTimestamp","eventName","eventLevel","triggerComponent","eventId","eventText","eventDescription","serverName"

[pipe-separated-fields-vpms]
DELIMS = "|"
FIELDS = "eventTimestamp","eventName","eventLevel","triggerComponent","eventMonitor","eventDescription"

(I've tried with and without quotes)
Hello. I need help solving this. I have the UF installed on a RHEL 7.9 server. Underneath that server is a RHEL 7.9 machine. This machine does not have the UF installed and is not connected to the domain; it is only connected to the server through the NIC. All of the machine's logs are forwarded to the server through rsyslog. The server with the UF installed then forwards both machines' logs to the Splunk server. Everything works great. Both of these machines have ClamAV installed. I need to be able to see the machines' ClamAV definitions in the Splunk dashboard. How can I do that?
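If the definition updates are logged by freshclam and the second machine's messages arrive via rsyslog, one hedged approach is to have the UF monitor those files; every path, index, and sourcetype below is an assumption about your environment:

# inputs.conf on the server running the UF (paths/index/sourcetype are examples)
[monitor:///var/log/clamav/freshclam.log]
sourcetype = clamav:freshclam
index = antivirus
disabled = 0

# File where rsyslog writes the forwarded ClamAV messages from the downstream machine (path is a guess)
[monitor:///var/log/remote/clamav.log]
sourcetype = clamav:freshclam
index = antivirus
disabled = 0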
After using multiple append=t and prestats=t, I am unable to use stats to capture the data into one nice line, because one of the mstats results might be late. Is it possible to get Splunk to take the last value of each of the columns (if a value does not exist) and place it at the end?

| mstats append=t prestats=t min("mx.service.status") min(mx.service.dependencies.status) min(mx.service.resources.status) min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type"
| mstats append=t prestats=t max("mx.service.replicas") WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 service.type IN (agent-based launcher-based) span=10s BY service.name expected.count
| mstats append=t prestats=t min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type" forked
| rename service.name as Service_Name, service.type as Service_Type

In the image below you can see, in orange for the time 13:51:30, that only some of the data arrived at that time. The issue is that if I do a stats on that, the 13:51:30 "Status_numeric" and "Dependencies" values are blank. I have tried streamstats and it kind of works, but in this case (below) Deployment did not get a value. Also, I don't know how to get forked and Expected to the last timestamp... any help would be great, thanks.
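One hedged idea for carrying the most recently seen value forward into rows where a series has not reported yet is filldown after the stats step; the field names below are taken loosely from the description of the screenshot and may not match the real output exactly:

| sort 0 _time
| filldown Status_numeric Dependencies Deployment Expected forked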
Hello @alacercogitatus, the Google Workspace for Splunk add-on throws an error. I installed the "Google Workspace for Splunk" add-on and configured the Google Workspace account, and I am able to see the user list in Splunk. The "Admin SDK Reports Ingest" configuration throws the error below for the services "login" and "user_account":

error_message="invalid literal for int() with base 10: '1636719173.276311'" error_type="<class 'ValueError'>" error_arguments="invalid literal for int() with base 10: '1636719173.276311'" error_filename="google_client.py" error_line_number="863" input_guid="840262-6716-ff73-e5c-c0816800774" input_name="Account"
Hi all, What would be a simple approach to creating an alert based on the following log data? The objective is to send an alert if the "Return Code" does not equal the number 1.

# Reporting Started #
#####################
# Processing task 1
# Processing task 2
# Processing task 3
#####################
# Return Code 1

TIA
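A minimal sketch of the alert search, assuming placeholder index/sourcetype values and that the code always appears literally as "# Return Code <number>":

index=your_index sourcetype=your_sourcetype "Return Code"
| rex "Return\s+Code\s+(?<return_code>\d+)"
| where tonumber(return_code) != 1

Saved as a scheduled alert that triggers when the number of results is greater than zero, this would fire whenever a return code other than 1 shows up.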
Hi all, I have a problem that I cannot solve. I have data that is the result of a loadjob where the fields are named 0_PREVIOUS_MONTH, 1_PREVIOUS_MONTH, 2_PREVIOUS_MONTH, ..... 12_PREVIOUS_MONTH. I would like to add the values of the fields starting from 1/4 (April 1st) up to the current month. Let me explain with an example: today we are in November, and I need the sum of a line that starts from April 1st until today. So if I do current month - 4 + 1 = 8, I have to add 4_PREVIOUS_MONTH + 5_PREVIOUS_MONTH + .... + 11_PREVIOUS_MONTH, which is exactly 8 months. I thought of a foreach with this syntax:

| foreach *_PREVIOUS_MONTH [eval TOTAL = TOTAL + if(*_PREVIOUS_MONTH >= 4, <<FIELD>>, 0)]

but it does not work. Can you help me? I'm going crazy trying to find a solution. Tks Bye Antonio
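As a hedged sketch of one way this foreach could be written, using the <<MATCHSTR>> token (the part of the field name matched by the wildcard, i.e. the leading month number) and single quotes around <<FIELD>> so that field names starting with a digit are treated as field references; the April threshold of 4 mirrors the example in the post:

| eval current_month = tonumber(strftime(now(), "%m"))
| eval TOTAL = 0
| foreach *_PREVIOUS_MONTH [ eval TOTAL = if(tonumber("<<MATCHSTR>>") >= 4 AND tonumber("<<MATCHSTR>>") <= current_month, TOTAL + coalesce('<<FIELD>>', 0), TOTAL) ]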
I am getting a success percentage of 97.00% from my query, and my requirement is to add an alert when the success percentage falls below 95.00%. I am getting the success % from the query below; please suggest how to build an alert that fires when the success rate is below 95.00% over a one-hour span.
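The base query is not shown here, so as a hedged sketch only: assuming it produces a numeric field named success_percentage (the field name is an assumption), the alert search could strip any percent sign, compare against the threshold, and be scheduled hourly over the previous hour, triggering when the result count is greater than zero:

<your existing query, run with earliest=-1h>
| eval success_percentage = tonumber(replace(success_percentage, "%", ""))
| where success_percentage < 95.0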
Hello community,

My client has experienced a severe issue on a search head cluster in the past few days due to the scheduler's behavior. We had a scheduled search taking too long to complete between two scheduled runs, which therefore had concurrent jobs running (although its concurrency_limit = 1). After a while, the scheduler started a burst of deferrals (up to 26k deferrals per minute across ~600 individual saved searches). The particularly strange behavior, for me, lies in the concurrency_limit set on the deferred schedules during the burst: 408 instead of the usual 1 (408 corresponds to the maximum search concurrency of the SHC: 4 SHs x 102). The burst terminated by itself after a while. We experienced several other bursts of lower magnitude; the big episodes disappeared after correcting the scheduled search previously mentioned.

Do you have any idea why the concurrency_limit changed on the fly? (No change was performed by a human.)

The graph shows deferred events by concurrency_limit; concurrency_limit=408 is on a chart overlay so the overall behavior with the other values can be seen.

index="_internal" AND sourcetype=scheduler AND host=<MySHMaster> status="continued" earliest=-72h
| timechart span=1min count by concurrency_limit

Regards,
I am using the query below:

index=A sourcetype IN (Compare,Fire)
| fillnull value=""
| search Name="*SWZWZQ0001*" OR Name="*SADAPP0002*" OR Name="*SALINU0016*" OR Name="*SGGRNP1002*"
| stats values(*) as * by sysid
| eval Status=if(F_Agent_Version ="" AND C_Agent_Version ="","Not Covered","Covered")
| table sourcetype sysid Name F_Agent_Version C_Agent_Version Status

sourcetype     ITAM_sysid  ITAM Name   Fire Agent Version  Compare Agent Version  Status
Compare Fire   0003fb      SALINU0016  32.30.              6.3                    Not Covered
Compare Fire   003fcb      SGGRNP1002  29.7                                       Not Covered
Fire           0d456       SADAPP0002  32.3                                       Covered
Compare        0d526       SWZWZQ0001                                             Not Covered

Due to the nulls in the first and second rows (SALINU0016, SGGRNP1002) for Fire Agent Version and Compare Agent Version, I am getting "Not Covered" instead of "Covered". Please let me know how to get rid of the nulls and make the status "Covered".
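As a hedged sketch of one possible variation (not necessarily the intended fix): dropping the early fillnull keeps missing versions as true nulls rather than empty strings that end up mixed into the values() results, so the coverage test can be done with isnull after the stats; the field names match the query above:

index=A sourcetype IN (Compare,Fire)
| search Name="*SWZWZQ0001*" OR Name="*SADAPP0002*" OR Name="*SALINU0016*" OR Name="*SGGRNP1002*"
| stats values(*) as * by sysid
| eval Status=if(isnull(F_Agent_Version) AND isnull(C_Agent_Version), "Not Covered", "Covered")
| table sourcetype sysid Name F_Agent_Version C_Agent_Version Status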
Splunk App for AWS - config loading issue. I am using Splunk Enterprise and the Splunk App for AWS, both on the latest versions. Please find the screenshot below. Help me to resolve the issue.