All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Experts, I am new to Splunk. I need to extract the fields contained in MSGTXT, but only when MSGTXT is in this format: "SZ5114RA 00 1045 .06 .0 165K 2% 9728K 3% 400M", as there are other types of message text in the logs as well.

Example of the desired field mapping: SZ5114RA as A, 00 as B, 1045 as C, .06 as D, .0 as E, 165K as F, 2% as G, 9728K as H, 3% as I, 400M as J.

Please help! Thank you. Below are the sample logs:

{"MFSOURCETYPE":"SYSLOG","DATETIME":"2021-10-16 02:24:47.53 +1100","SYSLOGSYSTEMNAME":"P01","JOBID":"SZ04","JOBNAME":"SZ04","SYSPLEX":"SYPLX1A","ACTION":"INFORMATIONAL","MSGNUM":"SZ5114RA","MSGTXT":"SZ5114RA 00 1045 .06 .0 165K 2% 9728K 3% 400M","MSGREQTYPE":""}

{"MFSOURCETYPE":"SYSLOG","DATETIME":"2021-10-16 02:24:47.54 +1100","SYSLOGSYSTEMNAME":"P01","JOBID":"SZ04","JOBNAME":"SZ04","SYSPLEX":"SYPLX1A","ACTION":"INFORMATIONAL","MSGNUM":"SZ04","MSGTXT":"SZ04 ENDED -SYS=P01 NAME=LIVE$SZ TOTAL CPU TIME= 12.4 TOTAL ELAPSED TIME= 47.2","MSGREQTYPE":""}
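A possible extraction for the question above, assuming the placeholder index/sourcetype names and that MSGNUM=SZ5114RA identifies this message format (as it does in the first sample event): anchor a rex so only text matching the full ten-token layout is captured.

```spl
index=your_index sourcetype=your_sourcetype MSGNUM=SZ5114RA
| rex field=MSGTXT "^(?<A>\S+)\s+(?<B>\d{2})\s+(?<C>\d+)\s+(?<D>[\d.]+)\s+(?<E>[\d.]+)\s+(?<F>\d+[KMG]?)\s+(?<G>\d+%)\s+(?<H>\d+[KMG]?)\s+(?<I>\d+%)\s+(?<J>\d+[KMG]?)$"
| where isnotnull(A)
| table A B C D E F G H I J
```

Events whose MSGTXT does not match the anchored pattern leave A empty, so the where clause filters out the other message-text formats.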
Hello Team, we want to integrate Imperva Cloud with Splunk. We have installed TA-Incapsula-cef in Splunk; however, we are unable to complete the integration. Kindly provide your support.
I want to list all the 'Authentication'-related content we have created in the ES app. Is there an SPL query to get this? I need to list all the dashboards, notable events, etc. of the Authentication type. I would really appreciate any help.
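There is no single query that covers every content type, but the REST endpoints can list content by name. A hedged sketch — matching on "authentication" in the title/label is an assumption about your naming conventions:

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="*authentication*" OR label="*Authentication*"
| table title label eai:acl.app
```

A similar search against /servicesNS/-/-/saved/searches lists saved and correlation searches; notable events themselves are stored in the `notable` index and can be searched there directly.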
Hi, we have created alerts for Windows server availability (server status is shutting down) using event codes (EventCode=41 OR EventCode=6006 OR EventCode=6008). The new requirement is to create dashboards for server status (running and shutting down). Can you give me suggestions for showing both statuses (running and shutting down) on one dashboard?
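One hedged approach: derive a status per host from its most recent relevant event. The index and sourcetype names are assumptions for your environment, as is using EventCode 6005 ("event log service started", i.e. boot) as the "running" signal:

```spl
index=wineventlog sourcetype="WinEventLog:System" (EventCode=41 OR EventCode=6005 OR EventCode=6006 OR EventCode=6008)
| eval status=if(EventCode=6005, "Running", "Shutting down")
| stats latest(status) as current_status latest(_time) as last_change by host
| eval last_change=strftime(last_change, "%Y-%m-%d %H:%M:%S")
```

This yields each server's most recent known state in a single table panel, which can also drive a color-coded dashboard column.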
Hi everyone, I'm looking for a search that shows me when the health status of splunkd changes from green to yellow or red. Would that be possible?
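splunkd logs its periodic health checks to the _internal index. A sketch that flags transitions — the component and field names here are assumptions, so inspect a raw PeriodicHealthReporter event first to confirm them:

```spl
index=_internal sourcetype=splunkd component=PeriodicHealthReporter
| streamstats current=f last(color) as prev_color by feature
| where isnotnull(prev_color) AND color!=prev_color
| table _time feature prev_color color
```

Alternatively, | rest /services/server/health/splunkd reports the current health tree, but only its present state, not the history of changes.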
Hi, I have checkboxes with 3 choices, as below. I want to display a panel based on the checkboxes selected. For example, if A & B are selected, the corresponding panel should be displayed; similarly, for any other combination like B & C or A & C, the corresponding panel should be displayed. Can you please help me with the code? P.S.: Currently I am able to display the panel only when a single checkbox is selected, or when all three checkboxes are selected, using condition-based XML code.
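One pattern often suggested is to let the checkbox's <change> handler set a token per combination; the checkbox token is the selected values joined by the delimiter. The token names and the exact match syntax below are a sketch — verify them against the Simple XML reference for your Splunk version:

```xml
<input type="checkbox" token="choice">
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <choice value="C">C</choice>
  <delimiter>,</delimiter>
  <change>
    <condition match="$choice$ == &quot;A,B&quot;">
      <set token="show_ab">true</set>
      <unset token="show_bc"></unset>
      <unset token="show_ac"></unset>
    </condition>
    <condition match="$choice$ == &quot;B,C&quot;">
      <set token="show_bc">true</set>
      <unset token="show_ab"></unset>
      <unset token="show_ac"></unset>
    </condition>
    <condition match="$choice$ == &quot;A,C&quot;">
      <set token="show_ac">true</set>
      <unset token="show_ab"></unset>
      <unset token="show_bc"></unset>
    </condition>
  </change>
</input>
```

Each panel then declares depends="$show_ab$" (and so on), so it renders only while its token is set.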
Hi, I'm trying to create a single value visualization with a trendline, where I want to compare today's result with yesterday's. The trendline should show the difference between yesterday's and today's results. I have tried several solutions, but the total number does not tally with today's result.

My base query is:

index=emailgateway action=* from!="" to!="" | stats count

which shows today's result.

Solution 1: using trendline wma2:

index=emailgateway action=* from!="" to!="" | timechart span=1d count as Total | trendline wma2("x") as Trend | sort - _time

The result shows the trendline, but the total number (90,702) did not tally with today's result (227,019).

Solution 2: using the delta command:

index=emailgateway action=* from!="" to!="" | timechart span=1d count as Total | delta Total p=1 as difference

The total number is different (including the trendline number).

Solution 3: using the tstats command (from Enterprise Security):

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Email where (All_Email.action=* AND All_Email.orig_dest!="" OR All_Email.orig_src!="") earliest=-48h latest=-24h | append [| tstats summariesonly=true allow_old_summaries=true count from datamodel=Email where (All_Email.action=* AND All_Email.orig_dest!="" OR All_Email.orig_src!="") earliest=-24h latest=now] | appendcols [| makeresults | eval time=now() ] | rename time AS _time

This also did not work. Can anyone help? Did I miss anything?
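Two things stand out in the attempts above: trendline wma2("x") references a field "x" that does not exist after the timechart (it would need to be wma2(Total)), and a relative 48-hour window produces partial-day buckets unless it is snapped to a day boundary. A hedged sketch for a single value panel comparing today with yesterday:

```spl
index=emailgateway action=* from!="" to!="" earliest=-1d@d latest=now
| timechart span=1d count as Total
```

With exactly two rows (all of yesterday, plus today so far), the single value visualization can compute the trend delta itself when its trend option compares against the previous row, so no trendline or delta command is needed in the search.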
I'm using mstats earliest_time(metric) to find the earliest time for metric. If I use    |mstats prestats=false earliest_time("http_req_duration_value") as "Start Time" where index=au_cpe_common_metrics    it returns a "Start Time" like 1633986822.000000 I want to be able to display this time in human readable format on a dashboard however when I try  ~~~ |mstats prestats=false earliest_time("http_req_duration_value") as "Start Time" where index=au_cpe_common_metrics | eval STime2=strftime("Start Time", "%d/%m/%Y %H:%M:%S")|table STime2 ~~~ I get no results.  I'd also like to be able to subtract earliest_time from latest_time to get the duration of the event based on other dimensions.   I also tried prestats = true but it  returned no Time values in the events. What format is earliest time in and why can't  I format it or do calculations with the value? What is happening here? I'm new to operating with metric indexes
Hi, I'm searching an internet usage index for events that contain a particular word somewhere in the domain; for example, the word could be "edublogs". From the result, I need to sum the total bytes downloaded for all events returned. The problem I'm having is needing to calculate a distinct count of centres and a distinct count of userids only where the domain name begins with "edublogs", e.g. "edublogs.com", and not when it doesn't, e.g. "accountsedublogs.com". I know this example is wrong, but can someone please help me change it to achieve the outcome below?

index=searchdata domain="*edublogs*" | stats count dc(centre) AS distinctCenters | stats count dc(userid) AS distinctUserids | stats sum(bytes) AS totalBytes by domain | table domain totalBytes, distinctCenters, distinctUserids

My desired results would look something like:

domain | totalBytes | distinctCenters | distinctUserids
Senior | 50000 | 15 | 321

Many thanks
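Conditional distinct counts can be done in a single stats call with dc(eval(...)). This sketch assumes the field names from the query above and that "begins with edublogs" means the domain starts with that word followed by a dot:

```spl
index=searchdata domain="*edublogs*"
| eval prefixed=if(match(domain, "^edublogs\."), 1, 0)
| stats sum(bytes) as totalBytes
        dc(eval(if(prefixed=1, centre, null()))) as distinctCenters
        dc(eval(if(prefixed=1, userid, null()))) as distinctUserids
```

The totalBytes sum covers every matching event, while the two distinct counts only consider events where the domain begins with the word; chaining stats commands (as in the original) discards fields after each stage, which is why the original returns nothing useful.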
msg: INFO | 2021-10-14 10:38 PM | Message consumed: {"InputAmountToCredit":"22.67","CurrencyCode":"AUD","Buid":"1401","OrderNumber":"877118406","ID":"58916"}

From the sample data above, I want to extract "InputAmountToCredit". I also want to extract "ID" and perform another search to get its status (the status is in another event and needs to be retrieved by matching on ID). This is for logs, and I have lots of events to search across to generate a report. Any help would be appreciated.
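A hedged sketch: because the JSON is embedded inside a larger log line, rex can pull the values out; the index name and the layout of the status events are assumptions, and stats by ID is used instead of join to merge the two event types (it scales better on large result sets):

```spl
index=your_index ("Message consumed" OR "Status")
| rex "\"InputAmountToCredit\":\"(?<InputAmountToCredit>[^\"]+)\""
| rex "\"ID\":\"(?<ID>[^\"]+)\""
| rex "\"Status\":\"(?<Status>[^\"]+)\""
| stats values(InputAmountToCredit) as InputAmountToCredit values(Status) as Status by ID
```

Adjust the "Status" field name and search term to match how the second event type actually encodes the status.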
The answer to this is probably stupidly simple, but I'm banging my head on it. Help and patience, please.

I am writing a query which audits user role activations. A user can have many roles, e.g. Network Admin, Security Reader, User Admin, Storage Operator, etc. Specifically, I want to view how many days have passed since a user activated a specific role:

Today - Last Activation = Days Since Last Activation

Questions:
1. How can I most efficiently subtract the time of the most recent role activation event from the current time?
2. How can I then show only the most recent results?
3. How can I then audit all users who have not activated a role in the past 30 days?

Base query for a single-user search over 30 days and sample results are below.

index=audit user=bob@example.com | eval days = round((now() - _time)/86400) | table Role, user, _time, days | sort - days

Sample data:

Role | UserName | _time | days
Global Reader | bob@example.com | 2021-09-19T08:35:06.998 | 29
Global Reader | bob@example.com | 2021-09-19T08:35:05.514 | 29
Systems Administrator | bob@example.com | 2021-09-23T05:55:51.177 | 25
Systems Administrator | bob@example.com | 2021-09-23T05:55:49.036 | 25
Global Reader | bob@example.com | 2021-09-24T00:48:20.254 | 24
Storage Operator | bob@example.com | 2021-09-24T00:48:18.942 | 24
Systems Administrator | bob@example.com | 2021-09-27T07:22:23.971 | 21
Systems Administrator | bob@example.com | 2021-09-27T07:22:22.971 | 21
Global Reader | bob@example.com | 2021-09-27T07:19:40.569 | 21
Global Reader | bob@example.com | 2021-09-27T07:19:39.460 | 21

Desired results, showing only the most recent events:

Role | UserName | _time | days
Global Reader | bob@example.com | 2021-09-24T00:48:20.254 | 24
Storage Operator | bob@example.com | 2021-09-24T00:48:18.942 | 24
Systems Administrator | bob@example.com | 2021-09-27T07:22:22.971 | 21
Global Reader | bob@example.com | 2021-09-27T07:19:39.460 | 21
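stats latest() collapses the events down to the most recent activation per user and role, which addresses all three questions at once. A sketch, keeping the field names from the base query above:

```spl
index=audit
| stats latest(_time) as last_activation by user, Role
| eval days=round((now() - last_activation)/86400)
| where days > 30
| eval last_activation=strftime(last_activation, "%Y-%m-%dT%H:%M:%S")
| sort - days
```

The `where days > 30` line answers the 30-day audit; remove it (and optionally re-add a user filter) to see every role's most recent activation regardless of age.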
Hello Splunk Community, can anyone help me build a query based on the below?

I have a batch job that usually starts at 5:30 pm every day and finishes in the early morning of the following day. The batch job has multiple steps logged as separate events, and there is no unique id to link a step to its batch job. I want to create a timechart which shows the total duration (step 1 to step 5) for each batch job occurring daily.

Example of one batch job's start and end times (dummy data):

Step | Start_Time | End_Time
1 | 2021-09-11 17:30:00 | 2021-09-11 23:45:01
2 | 2021-09-11 23:45:01 | 2021-09-12 01:45:20
3 | 2021-09-12 01:45:20 | 2021-09-12 02:35:20
4 | 2021-09-12 02:35:20 | 2021-09-12 03:04:25
5 | 2021-09-12 03:04:25 | 2021-09-12 05:23:06

I hope someone can figure this one out, as I have been stuck on it for a few days.

Many thanks, Zoe
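Since there is no job id, one hedged approach (the index, sourcetype, and field names are assumptions) is to assign each step to a "job day" by shifting its start time back 12 hours, so steps that run after midnight still group with the job that began at 17:30 the previous evening:

```spl
index=batch sourcetype=batch_steps
| eval start=strptime(Start_Time, "%Y-%m-%d %H:%M:%S"), end=strptime(End_Time, "%Y-%m-%d %H:%M:%S")
| eval job_day=strftime(relative_time(start, "-12h"), "%Y-%m-%d")
| stats min(start) as job_start max(end) as job_end by job_day
| eval duration_hours=round((job_end - job_start)/3600, 2)
| table job_day duration_hours
```

For the example above, this groups all five steps under job_day 2021-09-11 and yields roughly 11.9 hours (17:30:00 to 05:23:06). The 12-hour shift only works while jobs reliably start in the evening and finish before the next afternoon.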
I created my Splunk instance and created a username/password. I am unable to log in to Splunk. HELP:(
Hi, I have a production outage and I am really struggling to fix it. I have had my Unix admin on, as well as Splunk support, but they can't pinpoint the issue. Any help would be amazing, please.

I have a cluster with 1 SH, 1 MD, and 3 indexers.

On the SH, every X minutes I get "waiting for data" and it just hangs there for up to 60 seconds; all the screens are 100% unusable. If I run a search on the MD or the indexers, it's all fine. We turned on some verbose logging in log.cfg:

[python]
splunk = DEBUG

And we got this error that I can't really find on Google:

2021-10-17 13:07:49,790 DEBUG [616c036bfc7fa6567ccfd0] cached:69 - Memoizer expired cachekey=('j_ycUrLUdujLkZGGP2BX^rH^0aBu4qOEaCTBx89kc4teMYUq_MasVgLLGH9T^lr900ey_UYle7JSPVCMiA1vQiPTChSEQrNiPxobUi1Ut_VKQAwW1UC8H8R8fxXyODmliNS8', <function getServerZoneInfo at 0x7fa655877440>, (), frozenset())
2021-10-17 13:07:49,791 DEBUG [616c036bfc7fa6567ccfd0] cached:69 - Memoizer expired cachekey=('j_ycUrLUdujLkZGGP2BX^rH^0aBu4qOEaCTBx89kc4teMYUq_MasVgLLGH9T^lr900ey_UYle7JSPVCMiA1vQiPTChSEQrNiPxobUi1Ut_VKQAwW1UC8H8R8fxXyODmliNS8', <function isModSetup at 0x7fa656048290>, ('Murex',), frozenset())
2021-10-17 13:14:20,027 DEBUG [616c036bfc7fa6567ccfd0] cached:69 - Memoizer expired cachekey=('Qq597goWUdHcWpAp40yLt648IoIOsjngeZUNYmko18k_8LehDC7ZD0Daauwm0vMgbCiNehFe0KYbLHY3m^XFDWqHZPZM1w01V02phdaScMSdDFRrEu8_Q50t1lA5QRBVaiVL6N', <function getEntities at 0x7fa6560a2d40>, ('data/ui/manager',), frozenset({('count', -1), ('namespace', 'Murex')}))

Any help would be so good. I am considering installing the apps onto the search head before tomorrow's production; do you think that will work?
This is a sample access log as it appears in the Splunk UI:

{"timestamp":"2021-10-17T15:03:56,763Z","level":"INFO","thread":"reactor-http-epolpl-20","message":"method=GET, uri=/api/v1/hello1, status=200, duration=1, "logger":"reactor.netty.http.server.AccessLog"}
{"timestamp":"2021-10-17T15:03:56,763Z","level":"INFO","thread":"reactor-http-epolpl-20","message":"method=GET, uri=/api/v1/dummy1, status=200, duration=1, "logger":"reactor.netty.http.server.AccessLog"}

I want to extract the URL from the uri part (uri=/api/v1/dummy1) and count how many times each API is hit. I tried the query below, but it's not giving the desired result:

index=dummy OR index=dummy1 source=*dummy-service* logger=reactor.netty.http.server.AccessLog | rex field=message "(?<url>uri.[\/api\/v1\/hello]+)" | chart count by url

Can someone help with this?
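The bracketed part of the rex is a character class, so [\/api\/v1\/hello]+ matches any run of those individual characters rather than the literal path. Capturing everything up to the next comma is simpler:

```spl
index=dummy OR index=dummy1 source=*dummy-service* logger=reactor.netty.http.server.AccessLog
| rex field=message "uri=(?<url>[^,]+)"
| stats count by url
| sort - count
```

stats count by url then gives the hit count per API path; chart count by url works equally well.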
How do I use whois? The "Network Tools" app doesn't work, and "| lookup whois" does not work. Are there other valid apps, or how should I write the request?

username | src_ip
John | 82.*.*.*
Smit | 172.*.*.*
Help me write the request correctly.

user | src_ip
Jonh | 82.*.*.*
Smit | 172.*.*.*

The documentation says that these lines are required. How do I write the code to get whois data by src_ip correctly? (https://github.com/doksu/TA-centralops/wiki)

| lookup local=t centralopswhois_cache _key AS domain | centralopswhois output=json limit=2 domain
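Based on the snippet quoted from the TA-centralops wiki, the custom command is keyed on a field named domain, so the IP field would first need to be renamed. Whether the command accepts IP addresses rather than domain names is an assumption to verify against that wiki, and the index name below is a placeholder:

```spl
index=your_index
| table user src_ip
| rename src_ip as domain
| centralopswhois output=json limit=2 domain
```

If the command only handles domain names, a reverse DNS step (e.g. via a lookup or external script) would be needed before the rename.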
Hi, we have a hardware capture NIC that can accelerate packet capture, but it needs to load a special libpcap.so rather than the OS default libpcap. Using ldd, I cannot find Splunk Stream loading any version of libpcap.so. So my question is: how can I load my own libpcap.so to accelerate Splunk Stream packet capture? Is that possible?
I have one primary index, azure, with two sourcetypes: mscs:kube-good and mscs:kube-audit-good. I believe there could be duplication of log data between the two sourcetypes. What Splunk queries can tell me whether there is duplication of logs between the two sourcetypes? Does each have information that the other doesn't contain? Is there a lot of overlap? Please give me the Splunk queries that will do this job.
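A hedged first check, assuming duplicated events would share identical raw text: count how many sourcetypes each raw event appears under.

```spl
index=azure (sourcetype="mscs:kube-good" OR sourcetype="mscs:kube-audit-good")
| stats dc(sourcetype) as st_count values(sourcetype) as sourcetypes count by _raw
| where st_count > 1
```

If this returns no rows, the two sourcetypes carry no byte-identical events; comparing | fieldsummary output for each sourcetype separately then shows which fields exist only on one side, answering the "information the other doesn't contain" question.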
Is the "Free Splunk" free for business use please?