All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We have successfully implemented the TAXII feed from NH-ISAC and are looking for examples or use cases from others who currently work with this feed.
Hi! I have this field in my log: callerSipNumber="18121710_text". How should I extract "18121710" and name it "number"? I've tried |rex field=callerSipNumber "((?)[_\w+])". But it didn't give anything. Thanks a lot!
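A possible answer sketch for the extraction above (assuming the number is always the run of digits before the underscore). Note the original attempt fails because the capture group `(?)` has neither a name nor a pattern:

```spl
| rex field=callerSipNumber "^(?<number>\d+)_"
```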
This is a little tricky to explain, but I have this query:

index = active_directory directReports=* sAMAccountName=* | rex field=directReports max_match=0 "CN=(?<memberOf>[^,]+)" | rex field=memberOf mode=sed "s/\./ /g" | rename sAMAccountName as Manager memberOf as Employee | table Manager Employee

This displays the manager in column 1 and lists all the employees in column 2. How can I unlist the employees as separate rows, as follows?

Manager employee
Manager employee
Manager employee
Manager employee

TIA
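A sketch of one common approach: mvexpand splits a multivalue field into one row per value, so appending it after the rename in the question's query should produce a separate Manager/Employee row for each employee:

```spl
index=active_directory directReports=* sAMAccountName=*
| rex field=directReports max_match=0 "CN=(?<memberOf>[^,]+)"
| rex field=memberOf mode=sed "s/\./ /g"
| rename sAMAccountName as Manager, memberOf as Employee
| mvexpand Employee
| table Manager Employee
```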
Hello Splunkers! Quick question: is it OK to "jump" maintenance versions on the same release? For example, if I want to go to 7.3.4 from 7.3.2, can I follow path #1 or path #2?

1. 7.3.2 => 7.3.3 => 7.3.4
2. 7.3.2 => 7.3.4

Thanks.
Hi, we are getting data from syslog for SSL VPN login. Here is a sample log:

,,"'0'",,"'-'",,"Thor","'Tunnel'","MCU","'192.168.1.8:0'",,,"'14711'","'197'","'-'","'0'",,,,"Restricted Users" ,"'W'",,"'0x101'","'[::ffff:xxx.xxx.xx.xxx]:13996'","'11343'",, ,"'(Thor)@(BRANCH) (CN=Thor,OU=Restricted Users,OU=MCU (VDI),OU=MCU,DC=MCU,DC=com)'","'-'" ,"Mar 16 03:21:03 192.168.2.92 Mar 16 13:21:03 SSLVPN02 logserver: [16/Mar/2020:13:21:03.645800 +0300] ADMSSLVPN02 000000 kt 00000000 Info Audit Src='[::ffff:xxx.xxx.xx.xxx]:13996' Auth='-' User='(Thor)@(BRANCH) (CN=Thor,OU=Restricted Users,OU=MCU (VDI),OU=MCU,DC=MCU,DC=com)' SocksVersion='0x101' Command='Tunnel' Dest='192.168.3.80:0' Error='0' SrcBytes='11343' DstBytes='14711' Duration='197' VirtualHost='-' PlatformPrefix='W' EquipmentId='-' AppNumber='0'","2020-03-16T13:21:03.000-0700",,3,16,21,march,3,monday,2020,local,,,,"192.168.2.92",main,,1,,,logserver ,"::_...:::[//:::.+]___________='[:::...]:","udp:514",syslog,"Splunk-WIN",,15,0

Here is the query I've written:

extracted_source="udp:514" Duration="" Tunnel CN="Appa" | rex field=Duration "\'(?P.+)\'" | rex field=Src ":(?P.+):" | eval Start_Time = strftime(strptime(time, "%Y-%m-%dT%H:%M:%S")-Duration, "%D-%T") | eval Duration(minutes)=Round(Duration/60) | eval End_Time = strftime(strptime(time, "%Y-%m-%dT%H:%M:%S"), "%D-%T") | eval User=CN | eval SourceIP=replace(SourceIP, ":ffff:", "") | eval SourceIP=replace(SourceIP, "]", "") | table CN,Start_Time,SourceIP,End_Time,Duration(minutes)

This gives me a result like this:
CN    Start_Time          SourceIP        End_Time            Dura(minutes)
Thor  03/16/20-12:20:46   xxx.xxx.xx.xxx  03/16/20-12:59:02   38
Thor  03/16/20-12:58:57   xxx.xxx.xx.xxx  03/16/20-13:08:14   9
Thor  03/16/20-13:08:21   xxx.xxx.xx.xxx  03/16/20-13:10:11   2
Thor  03/16/20-13:10:18   xxx.xxx.xx.xxx  03/16/20-13:12:02   2
Thor  03/16/20-13:12:05   xxx.xxx.xx.xxx  03/16/20-13:17:40   6
Thor  03/16/20-13:17:46   xxx.xxx.xx.xxx  03/16/20-13:21:03   3
Thor  03/16/20-13:21:12   xxx.xxx.xx.xxx  03/16/20-14:12:57   52

I need to make it concise. The desired output I'm trying to get looks like this:

CN    Start_Time          End_Time            Dura(minutes)
Thor  03/16/20-12:20:46   03/16/20-14:12:57   112

CN = UserName
Start_Time = first session's timestamp
End_Time = last session's timestamp
Dura(minutes) = sum of duration of all sessions

How can I achieve this?
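A sketch that collapses the per-session rows into one row per user (field names taken from the question's table; earliest/latest pick the chronologically first and last values):

```spl
... | stats earliest(Start_Time) as Start_Time
          latest(End_Time)      as End_Time
          sum("Dura(minutes)")  as "Dura(minutes)"
    by CN
```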
Hello Splunkers, I have trouble with a result. For example, I have some log data:

Goat | alive
Goat | dead
Goat | alive
Rabit | alive
Rabit | dead

My problem is how to count alive vs. dead per animal. For example, Goat has alive=2, dead=1, so diff = alive - dead = 1, and Rabit has alive=1, dead=1, so diff = alive - dead = 0. I want to create a result table like this:

Animal | alive | dead | diff
Goat   | 2     | 1    | 1
Rabit  | 1     | 1    | 0

Please help me with the query. Thank you, Splunkers!
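A chart-based sketch (assuming the two columns are extracted into fields named Animal and Status, with Status values alive/dead; those names are assumptions, not given in the question):

```spl
... | chart count over Animal by Status
| fillnull value=0 alive dead
| eval diff = alive - dead
```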
Hello, in our system, the Splunk REST API client does a login call to one of the search heads and gets a token from that search head. Later on, the load balancer switches the search head that the client is connected to. The service is still logged in with the token from the previous search head, but it is now sending that token to a different search head it has never logged into. How can I solve this issue now that we are running a search head cluster? Thanks.
When I click the chart, my drilldown is not working, but when I remove the |eval AAA=case(like(o,"%Win%"),"Win",like(o,"%Lin%"),"Linux",like(o,"%Missing%"),"Others",like(o,"%So%"),"Sol",like(o,"%AIX%"),"AIX",1=1,"Others") eval function, it works fine. Can anyone help me with the issue? Below is my code snippet.

<table>
  <title>status</title>
  <search>
    <query>index=* sourcetype=*|fillnull value=""|eval AAA=case(like(o,"%Win%"),"Win",like(o,"%Lin%"),"Linux",like(o,"%Missing%"),"Others",like(o,"%So%"),"Sol",like(o,"%AIX%"),"AIX",1=1,"Others")|search $aaa$ | rename status as "Status"|stats count by "Status"|eventstats sum(*) as sum_* |foreach * [eval "%"=round((count/sum_count)*100,2)]|rename count as Count|fields - sum_count</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="count">10</option>
  <option name="drilldown">cell</option>
  <option name="percentagesRow">false</option>
  <option name="refresh.display">progressbar</option>
  <option name="totalsRow">false</option>
  <option name="wrap">false</option>
  <format type="color" field="status">
  </format>
  <drilldown>
    <link target="_blank">search?q=index=*sourcetype=*|fillnull value=""|eval AAA=case(like(o,"%Win%"),"Win",like(o,"%Lin%"),"Linux",like(o,"%Missing%"),"Others",like(o,"%So%"),"Sol",like(o,"%AIX%"),"AIX",1=1,"Others")|search $aaa$ |rename status as "Status" |search "Status"="$click.value$"|stats count by status,aaa&amp;earliest=-24h@h&amp;latest=now</link>
  </drilldown>
</table>
</panel>
Is there a base_config available for Search Head Clustering configuration? If not, is the file-based approach of deploying SHC not recommended?
Hi, I was trying to create email alerts from a Splunk log, but it is not working as expected. I set up the alert and the email is triggering, but I am not getting the mail content as expected. Below is how the log looks in Splunk:

2020-03-18T17:04:27,335+0100 ERROR [http-nio-10.96.106.134-8084-exec-6] c.n.c.controller.ErrorController - Message = Extraction process failed. ExceptionMessage = The date 223345.990.2 cannot be parsed. The pattern is: YYYY-MM-DD. ExceptionClass = DateParseException. ExceptionId = fb04d08a-db10-40ee-97a4-d2934f5a55da host = xxxx.oneadr.net source = /xxx/logs/ninjainst/microservices/xxxx/TokenHandler/TokenHandler-app.log sourcetype = xxxx_app

This is what I configured in my Splunk alert for the mails:

"The alert condition for '$name$' was triggered on $trigger_date$, where there is a pattern ERROR in the log. ==================== Host Name: $result.host$. Source File: $result.source$. Message: $result.Message$. Exception Message: $result.ExceptionMessage$. ExceptionClass: $result.ExceptionClass$. ExceptionId: $result.ExceptionId$"

But I am getting the mail as below; a few fields are not populating. Any idea about this?

The alert condition for '768_Alert' was triggered on March 18, 2020, where there is a pattern ERROR in the log. ==================== Host Name: xxxx.oneadr.net. Source File: /xxx/logs/ninjainst/microservices/xxxx/TokenHandler/TokenHandler-app.log. Message: . Exception Message: . ExceptionClass: . ExceptionId:
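One possible cause (a sketch, not confirmed from the question alone): the $result.*$ tokens only populate when the corresponding fields exist in the alert's search results. Since host and source are default fields they populate, while Message, ExceptionMessage, etc. may not be extracted. A hedged sketch of extracting them explicitly in the alert's search, assuming the log format shown above:

```spl
... | rex "Message = (?<Message>.+?)\. ExceptionMessage = (?<ExceptionMessage>.+?)\. ExceptionClass = (?<ExceptionClass>\S+)\. ExceptionId = (?<ExceptionId>\S+)"
```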
Hi all, I have a scripted output file that Splunk is ingesting via a heavy forwarder. For the last few weeks, I have been facing an issue where Splunk suddenly stops ingesting the data even though the script is writing the data to the output file. The script is configured to run every 2 minutes, removing the previous data and writing the new data. When I check the internal logs, I get the errors below:

03-18-2020 12:50:01.521 +0000 ERROR TailReader - Ignoring path="/tmp/splunkDataFiles/labSanityCheck.txt" due to: Bug: tried to check/configure STData processing but have no pending metadata.
03-18-2020 12:50:01.517 +0000 ERROR TailReader - failed to compute crc for /tmp/splunkDataFiles/labSanityCheck.txt (method: 0, hint: No such file or directory).

As per some previous answers to this similar problem, I updated CHARSET=AUTO, but that did not help. Can somebody suggest anything regarding this issue?
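One pattern sometimes suggested for monitored files that are truncated and rewritten every run (a hedged sketch; the monitor stanza path is taken from the error above, and the exact settings should be checked against the inputs.conf documentation for your version) is to adjust how the tailing processor computes the file's CRC:

```ini
[monitor:///tmp/splunkDataFiles/labSanityCheck.txt]
crcSalt = <SOURCE>
initCrcLength = 1024
```

An alternative design worth considering is to run the script as a Splunk scripted input and read its stdout directly, which avoids the delete-and-rewrite race on the file entirely.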
I'm using a summary index to get data and display it in a timechart, but I am not able to create the time chart with the data.

index = summary_dm search_name = Instance_count | table total_Instancecount _time

(total_Instancecount, _time) are the two fields. The summary index is populated by using:

index = application cf_org = cf_space = cf_app = instance_index = | bucket _time span=1min | dedup cf_org cf_space cf_app instance_index | timechart span=1min count(instance_index) by cf_app | addtotals fieldname = Total_instances | fields _time Total_instances

A report is scheduled using the above query, and the summary index is populated with _time and total_Instancecount.
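A sketch of charting the summary data directly (assuming total_Instancecount is numeric; table does not produce a timechart, so an explicit timechart over the summary index is needed):

```spl
index=summary_dm search_name=Instance_count
| timechart span=1m max(total_Instancecount) as total_Instancecount
```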
I have my APIs hosted on a web server. I want to know if we can write a Splunk query to get the data from my API.
Am I missing something with how DB Connect works? I have DBC on a HF, and if I create a search that creates a lookup, that lookup will reside on the HF; there is no mechanism that moves it into the SHC so it can be used? Similarly, I can't run a | dbxquery from the SHC, right? Thanks.
I am running my query, and say the total number of statistics returned is 300,000 ("returned 300,000 results") and the search is completed. How can I show that same result count and the search-completed status in my dashboard as an indicator?
I have a field named "Message", with content as below:

Active Directory Domain Services could not use DNS to resolve the IP address of the source domain controller listed below. To maintain the consistency of Security groups, group policy, users and computers and their passwords, Active Directory Domain Services successfully replicated using the NetBIOS or fully qualified computer name of the source domain controller. Invalid DNS configuration may be affecting other essential operations on member computers, domain controllers or application servers in this Active Directory Domain Services forest, including logon authentication or access to network resources. You should immediately resolve this DNS configuration error so that this domain controller can resolve the IP address of the source domain controller using DNS.
Alternate server name: AAABB0001
Failing DNS host name:

I want to extract the value "AAABB0001". I tried to use eval n=mvfind(Message,"\w{5}\d{4}"), but it only returns "0". Could you please suggest a better solution? Thanks.
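A rex-based sketch: mvfind returns the index of the first matching multivalue entry (hence the "0"), whereas rex extracts the value itself. The field name alternate_server below is an assumption for illustration:

```spl
| rex field=Message "Alternate server name:\s+(?<alternate_server>\w{5}\d{4})"
```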
Hello! So I have an alert that emails out a report of product names, their lifecyclestatus, and the PrimaryPO, SecondaryPO and TertiaryPO (Product Owner). I would like to create a token to pass to the "to" field of the email message so that rows where lifecyclestatus=newproduct go to the PrimaryPO. It can either be a batch email sent to multiple PrimaryPO addresses for multiple lifecyclestatus=newproduct rows, or a single email to each separately. Does anyone have any ideas if this is doable? I hope I have explained that clearly enough. Thanks in advance! Carly. Please stay safe and healthy!
If a telephone number is present in both Index 1 and Index 2, display the associated device name from the event in Index 2 and then display the resolution code from Index 2. If anyone could point me in the right direction, I would be grateful! Splunk>Hunk Version: 7.2.9.1
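One possible sketch of a stats-based correlation across the two indexes. All field names here (telephone_number, device_name, resolution_code) and the index names are hypothetical, since the question does not give them:

```spl
(index=index1 OR index=index2)
| stats dc(index) as index_count
        values(device_name)     as device_name
        values(resolution_code) as resolution_code
  by telephone_number
| where index_count=2
```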
I have order data, and I need to trend the orders for the last 15 days, plotting three values (high, low and current) on the same graph.

index=abc sourcetype=logg Ordertype= retail or online

I need to trend the high, the low, and today's value over the last 15 days.
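A hedged sketch of one way to plot the three series (the metric field order_count and the per-day grain are assumptions, since the question does not name the value being trended):

```spl
index=abc sourcetype=logg (Ordertype=retail OR Ordertype=online) earliest=-15d@d
| timechart span=1d max(order_count) as high
                   min(order_count) as low
                   latest(order_count) as current
```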
I search the same time period in wineventlogs for host values with tstats (37,558 results) and with a regular search (42,008 results):

| tstats count where index=wineventlog sourcetype=wineventlog (TERM(LogName=Microsoft-Windows-PowerShell/Operational) OR TERM(SourceName=Microsoft-Windows-PowerShell)) (TERM(EventCode=4103) OR TERM(EventCode=4104) OR TERM(EventCode=4105) OR TERM(EventCode=4106)) by host

versus:

index=wineventlog sourcetype=wineventlog (TERM(LogName=Microsoft-Windows-PowerShell/Operational) OR TERM(SourceName=Microsoft-Windows-PowerShell)) (TERM(EventCode=4103) OR TERM(EventCode=4104) OR TERM(EventCode=4105) OR TERM(EventCode=4106)) | stats count by host

The number of ComputerName values for the same time period is 41,656, which may be less simply because new logs arrived on the indexers for that time period before my searches above were run. Ironically, that search took less time than my search on the indexed field "host", and I don't understand that either: 375s vs. 430s, respectively.