All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Team, Requirement: ES incidents/alerts should be markable as True Positive or False Positive as a verdict. Please help me understand how I can fulfill this requirement. Is there a custom field configuration, or can a drop-down list be configured?
Hello all, It's my second day with Splunk and I can't understand the alerting logic. I created an alert search and it works fine. As a search result I have a table:

IP address    username   broken_rule   count_of_broken_rules
192.168...    aaa        rule_name     75
192.168...    bbb        rule_name     74
199.188...    ccc        rule_name     20

As you can see in the picture, I configured the alert to send an email when the count of broken rules is more than 60, and it must send an email every hour. If I choose the option "Once", I get an email with only the first record, but I want the email to include the second record too. If I choose the option "For each result", I get a separate email for every record, even though the third record does not meet the requirement of > 60. I want one email with the two matching records (the first and second in the example). What am I doing wrong?
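One common way to get a single email containing only the qualifying rows (a sketch; the field name count_of_broken_rules is taken from the post) is to move the threshold into the search itself and keep the alert on the "Once" option with trigger condition "Number of results > 0":

```spl
... your base alert search ...
| where count_of_broken_rules > 60
```

The emailed results table then contains exactly the rows over the threshold, so "Once" sends one email listing all of them.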
Hi All, I am new to the Machine Learning Toolkit. I have a use case to predict sales volume depending on day of week, hour of day, proximity to the beginning or end of the month, and special holidays. Apart from predicting the sales volume, I would also like to generate thresholds as an upper bound and a lower bound, and alert when a value crosses either bound. Which ML algorithm is best suited for this kind of use case? Raw data: a lookup storing the sales for every 30 minutes over the last year.

Time              Sales
23/03/2021 01:30  195
23/03/2021 02:00  570
23/03/2021 02:30  3580
23/03/2021 03:00  1350
23/03/2021 03:30  103
23/03/2021 04:00  245
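Before reaching for MLTK, the built-in predict command may already cover this (a sketch; the lookup name sales_history.csv is an assumption, and the Time format is inferred from the sample): LLP is a seasonal algorithm, and predict emits upper/lower confidence bounds that can be alerted on.

```spl
| inputlookup sales_history.csv
| eval _time=strptime(Time, "%d/%m/%Y %H:%M")
| timechart span=30m sum(Sales) AS Sales
| predict Sales algorithm=LLP future_timespan=48 upper95=upper lower95=lower
```

Within MLTK itself, the Forecast Time Series assistant (e.g. StateSpaceForecast) is the comparable option for this kind of seasonal forecasting with confidence intervals.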
I have two saved searches, A and B. Each gives an output like below:

A:
host
host1
host2
host3

B:
host
host2
host3
host4

I'd like to execute a search that uses the results of both saved searches to perform set subtraction: A - B. So in this example I should get host1 as the result. The number of hosts for A and B can be greater than 10,000, so I'd like to avoid the subsearch command, as my output could be truncated.
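A sketch of one subsearch-free pattern, assuming both saved searches can be combined with union: tag each side, group by host, and keep hosts that only ever appeared in A (union still has its own result limits worth checking):

```spl
| union
    [| savedsearch A | eval src="A"]
    [| savedsearch B | eval src="B"]
| stats values(src) AS srcs BY host
| where mvcount(srcs)=1 AND srcs="A"
```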
hello
In the stats command below, I try to retrieve the _time values (the Splunk timestamp) corresponding to the "Resolver group" column. I managed to do this by replacing the "by ticket_id" clause with "by assignment_group_name", but I need to keep my "by ticket_id" clause:

| stats values(assignment_group_name) as "Resolver group", dc(assignment_group_name) as "Number of assignment group" by ticket_id

I tried something like this, but I get just one timestamp:

| stats latest(_time) as _time, values(assignment_group_name) as "Resolver group", dc(assignment_group_name) as "Number of assignment group" by ticket_id

Could you help please?
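A sketch of one way to keep a timestamp per assignment group while still grouping by ticket_id: concatenate the formatted time onto the group name before aggregating, so values() returns one time/group pair per group:

```spl
| eval group_time=strftime(_time, "%Y-%m-%d %H:%M:%S")." ".assignment_group_name
| stats values(group_time) AS "Resolver group", dc(assignment_group_name) AS "Number of assignment group" BY ticket_id
```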
Hi All, I ran this query and am getting all the required output, but I also want to add Node IP and SP IP to the report.

`netapp_index` sourcetype=ontap:system (source="system-node-get-iter" OR source="system-get-node-info-iter")
| dedup node
| sort node
| table host, node, is-node-healthy, node-serial-number, node-model, product-version, node-location
| rename host AS "Cluster Name", node AS "Node Name", is-node-healthy AS "Health Status", node-serial-number AS "Node Serial Number", node-model AS "NetApp Model", product-version AS "ONTAP Version", node-location AS "Site", prod-type AS "Product Type"

What parameter do I need to use to get Node IP and SP IP in Splunk?

Thanks & Regards
Raghu
Hi Everyone, I am trying to pull SnapMirror information into Splunk and I am getting only limited information, such as errors. I would also like to pull, for example, the SnapMirror lag time. Can anyone please help me get this?

Thanks & Regards
Raghu
Hi, I am using two indexes (index1 and index2). I want to pull a field from index1 (URL, renamed to url_1), and then in a subsearch I want to pull more fields from index2. In the end I want a table with the field from index1 (url_1) and the fields from index2.
I'm trying to extract data from JSON and show it in my dashboard, but failed:

{
  "timestamp":"2021-04-22T09:14:38.727Z",
  "message":"Metrics: key1=false, [SystemMetricsBean] key2=key2val, [MetricAttributes] sumCountViaMetricsAnnotation=2, failureCount=0, sumDuration=46, minDuration=22, maxDuration=24, sumCountViaCacheAnnotation=2, numWithoutCache=2, numDisableCache=2",
  "version":"1.1.0"
}

I'd like to extract failureCount and the other statistics, then display them in my dashboard. Here is my search, but it does not work:

base search
| spath path=message output=metrics
| stats count(sumDuration) as duration, count(failureCount) as fail

Can you help guide me? I also tried other commands like extract, eval, and rex, but got no result either.
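The spath call only pulls out the message string; the metrics inside it are plain key=value text rather than JSON, so a second extraction step is needed. A sketch, with rex patterns written against the sample event above ("base search" is the placeholder from the post):

```spl
base search
| spath path=message output=metrics
| rex field=metrics "failureCount=(?<failureCount>\d+)"
| rex field=metrics "sumDuration=(?<sumDuration>\d+)"
| stats sum(sumDuration) AS duration, sum(failureCount) AS fail
```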
I am very new to Splunk. We are trying to monitor our Hyperledger Fabric network with the Splunk App for Fabric in Splunk Enterprise. We have a Hyperledger Fabric network at version 2.2.2. I installed the app and followed the instructions at https://splunkbase.splunk.com/app/4605/#/details . I also set up the fabric-logger; it is running and able to fetch blocks and event details from the peer it is connected to. In the Splunk Enterprise UI, I got the message below:

"Search peer indexer-0 has the following message: Received event for unconfigured/disabled/deleted index=hyperledger_logs with source="source::fabriclogger" host="host::fabric-logger-6b79d77b99-bncwj" sourcetype="sourcetype::fabric_logger:endorser_transaction". So far received events from 1 missing index(es)."

I have HEC enabled and I also have the index hyperledger_logs. I don't see any errors in the logs of fabric-logger or in the indexer, but I am also not seeing any data in Splunk. Please find the screenshot below.
When I am trying to validate the connection, the validation takes a while and finally pops up a message saying:

"The TCP/IP connection to the host xxxxxx, port 1433 has failed. Error: connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall."

Can anyone help me with this?
Hello community, I tried to find an answer to my problem but couldn't, so I'm posting it here :). My search is based on Windows Event ID 4663 (https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4663) and I am trying to do the following:
1. find all ObjectName values with AccessMask IN (0x2,0x4,0x6) that have EXE, DLL, SYS or OCX extensions
2. from 1., take the corresponding ProcessName (which created the ObjectName) and feed it into a new search
3. the new search has to replace the values of ProcessName with ObjectName in 4663 and re-run the search from 1.
4. output in a table the creation time of the ObjectName from 1. and both process creators (for the ObjectName and for the ProcessName from 2.)

I know it is a little messy, but what I am trying to find is a malware dropper: a freshly written executable (usually) that goes on to write other binaries. Here are my two attempts at this:

5. with join, which surprisingly is faster:

index=* earliest=-3h latest=now sourcetype=xmlwineventlog EventCode=4663 AccessMask IN(0x2,0x4,0x6) (ObjectName=*\.cab OR ObjectName=*\.dll OR ObjectName=*\.exe OR ObjectName=*\.ocx OR ObjectName=*\.sys OR ObjectName=*\.bat)
| rename ObjectName AS PayloadCreated ProcessName AS Dropper
| join Dropper
    [ search index=64388 earliest=-1d latest=now sourcetype=XmlWinEventLog EventCode=4663 AccessMask IN(0x2,0x4,0x6) (ObjectName=*\.dll OR ObjectName=*\.exe OR ObjectName=*\.ocx OR ObjectName=*\.sys)
    | rename ObjectName AS Dropper ]
| table _time Computer SubjectUserName SubjectLogonId SubjectUserSid ProcessName ProcessId Dropper PayloadCreated

6. with map, which due to the large number of results from search 1. is really slow (I had to stop and delete the job after 5 minutes) and gives some duplicates (except for _time):

index=* sourcetype=XmlWinEventLog EventCode=4663 AccessMask IN(0x2,0x4,0x6) (ObjectName=*\.cab OR ObjectName=*\.dll OR ObjectName=*\.exe OR ObjectName=*\.ocx OR ObjectName=*\.sys OR ObjectName=*\.bat OR ObjectName=*\.dat OR ObjectName=*\.pdb OR ObjectName=*\.sdb)
| rename ObjectName AS PayloadCreated ProcessName AS Dropper
| map maxsearches=999 search="search index=* sourcetype=XmlWinEventLog EventCode=4663 AccessMask IN(0x2,0x4,0x6) ObjectName=$Dropper$ | eval PayloadCreated=$PayloadCreated$, Dropper=$Dropper$"
| table _time Computer SubjectUserName SubjectLogonId SubjectUserSid ProcessName ProcessId Dropper PayloadCreated

Is there any function or workaround for this?

Thank you all.
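One join- and map-free alternative (a sketch; field names are from the post, lowercase path normalization is assumed, and the extension filtering is omitted for brevity): duplicate each 4663 write event into two rows, one keyed by the file that was written and one keyed by the process that wrote it, so a single stats pass can find binaries that were both created and later used as a writer:

```spl
index=* sourcetype=XmlWinEventLog EventCode=4663 AccessMask IN(0x2,0x4,0x6)
| eval written=lower(ObjectName), writer=lower(ProcessName)
| eval row=mvappend("created|".written, "writer|".writer)
| mvexpand row
| rex field=row "(?<role>[^|]+)\|(?<path>.+)"
| stats earliest(_time) AS first_seen,
        values(eval(if(role="created", writer, null()))) AS created_by,
        sum(eval(if(role="writer", 1, 0))) AS files_written
        BY path
| where isnotnull(created_by) AND files_written > 0
```

The surviving rows are paths that some process created and that themselves wrote files afterwards, i.e. dropper candidates, without re-running a search per result.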
Hi, I am trying to add around 400 entities from my entity list in Splunk ITSI. After selecting the entities I can put them in maintenance mode with a bulk action. The problem is that I cannot select the entities in bulk in the filter; I have to enter each entity name one by one and select it from the dropdown. Is there any procedure to select them in bulk at once and then enable maintenance mode for them? Or can we just modify a configuration file and add these entities in the backend? Please help me out.
Follow-up post: in the module 4 lab of "Splunk Fundamentals 1 Lab Exercises" (ingesting data), I am not getting any events indexed in the Data Summary. Please help; my training is halted because of this.
I'm trying to do something like:

index=someindex action=someaction | where city_id in ([search dbxquery query="select city_id from Cities where Country='USA'" connection="SQLserver"])

The dbxquery will return one or more results. Obviously this is malformed syntax; any idea how I can pull this off?

Thanks.
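A sketch of the usual pattern (connection and field names taken from the post): run dbxquery inside a subsearch and let format turn its rows into an ORed list of field=value terms for the outer search. The returned column must carry the same name as the outer field (rename it if it does not), and subsearch result limits still apply:

```spl
index=someindex action=someaction
    [| dbxquery connection="SQLserver" query="select city_id from Cities where Country='USA'"
     | fields city_id
     | format]
```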
Hi, I am using the Splunk Add-on for Microsoft Cloud Services to integrate Splunk with MS Azure. I want to ingest Event Hub data into Splunk, and I have completed all the prerequisites, such as the app registration on the Azure side and configuring the Azure app account in the add-on on Splunk. I then configured the Azure Event Hub input in the add-on, but I am not getting Event Hub data in Splunk. When I configured the Azure Audit Logs input in the add-on, I did get audit logs in Splunk. Can anyone please help me figure out what the issue might be?
Hello everyone, I am getting event data into my Splunk. I want to query data (logins by country) in Splunk search, and I am using the following search string:

index=onelogin eventtype=onelogin_event_user_logged_into_onelogin Country="United States"
| rename ipaddr AS IP_ADDR
| iplocation IP_ADDR
| dedup id

but it is not returning any results. Why is that?
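A likely cause (an assumption, since Country is typically created by iplocation rather than present in the indexed events): the filter Country="United States" runs before iplocation has produced the field, so no events match. Moving the filter after iplocation is worth trying:

```spl
index=onelogin eventtype=onelogin_event_user_logged_into_onelogin
| rename ipaddr AS IP_ADDR
| iplocation IP_ADDR
| search Country="United States"
| dedup id
```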
Hi, I have this graph and the DETRACTOR item appears as NULL, I think because the chart's "by" clause has two parameters. I would like not to display the NULL for DETRACTOR; is there a way to remove it from the display? Thanks. This is the code:

index=.........
| table Mese, MOTIVO_KO, PERCENTUALE, DETRACTOR
| chart values(PERCENTUALE) as %, values(DETRACTOR) as DETRACTOR by Mese, MOTIVO_KO
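Two common options, sketched against the query above: drop rows with an empty DETRACTOR before charting, or let chart's usenull=f suppress the NULL series created by the split-by field:

```spl
index=.........
| where isnotnull(DETRACTOR)
| chart usenull=f values(PERCENTUALE) AS "%", values(DETRACTOR) AS DETRACTOR BY Mese, MOTIVO_KO
```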
Hello, I just signed up for Phantom Community Edition and was wondering how long the request takes to be approved?

Regards,
M
Hi guys, this is a bit embarrassing, but I am stuck trying to figure out how to compute the following results.

I have to count how many times a client has successfully changed his/her account "nickname". To do so, the client has to go to the transaction "change_id_page_nick" and receive the approval tag "04X"; after that, the client is taken to the transaction "confirm_token_change" and gets the approval tag "051". Any other scenario results in an unsuccessful attempt to change the nickname, and should be recorded nonetheless.

Let's suppose that the clients in the first stage ("change_id_page_nick") look like this:

ID_CLIENT  TAG
Gabby      04X
Gabby      08X
Alex       04X
Nicole     04X

and for the second stage ("confirm_token_change") it can look like this:

ID_CLIENT  TAG
Gabby      051
Gabby      04P
Alex       051

What I want to achieve is this table:

ID_CLIENT  TAG_1  TAG_2  SUCCESSFUL?
Gabby      04X    051    YES
Gabby      08X    04P    NO
Alex       04X    051    YES
Nicole     04X    N.A    NO

This table lets me see every attempt made by every client, and also which clients did not complete the process; Nicole has N.A in TAG_2, meaning she did not proceed. I come from the world of SQL, so I thought about doing table joins, but Splunk does not work like that, and I would be very happy if you could show me how to reach the table above.

The results from the first stage are obtained with the query:

index="page_upload_info" | search "change_id_page_nick" | fields ID approval_tag

The second stage's data:

index="page_upload_info" | search "confirm_token_change" | fields ID approval_tag

I don't know if the best thing to do is a multisearch first, or a join, in order to get the table above.

Kindly,
Cindy
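A join-free sketch, with some loud assumptions: the field names ID_CLIENT and approval_tag are taken from the post, and attempts are paired by their order of occurrence per client. Tag each event with its stage, number the attempts per client and stage with streamstats, then pivot with stats:

```spl
index="page_upload_info" ("change_id_page_nick" OR "confirm_token_change")
| eval stage=if(searchmatch("change_id_page_nick"), "TAG_1", "TAG_2")
| sort 0 ID_CLIENT _time
| streamstats count AS attempt BY ID_CLIENT, stage
| stats values(eval(if(stage="TAG_1", approval_tag, null()))) AS TAG_1,
        values(eval(if(stage="TAG_2", approval_tag, null()))) AS TAG_2
        BY ID_CLIENT, attempt
| eval TAG_2=coalesce(TAG_2, "N.A")
| eval SUCCESSFUL=if(TAG_1="04X" AND TAG_2="051", "YES", "NO")
| table ID_CLIENT, TAG_1, TAG_2, SUCCESSFUL
```

Because every attempt produces a row and TAG_2 defaults to "N.A", incomplete attempts like Nicole's still appear in the output.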