All Topics


I use the API to create a search:

POST https://[IPaddress]:8089/services/search/jobs

Body:

search sourcetype=ipfix
| regex destinationIPv4Address = "^(12\.10\.2\.(?:[0-9]|[1-9][0-9]|1(?:[0-9][0-9])|2(?:[0-4][0-9]|5[0-5])))$"
| stats sum(eval(octetDeltaCount/4)) as Traffic count(Sequence) as Hint values(protocol) by sourceIPv4Address destinationIPv4Address protocolIdentifier destinationTransportPort sourceTransportPort
| eval protocol=case(protocolIdentifier==1, "ICMP", protocolIdentifier==4, "IP", protocolIdentifier==6, "TCP", protocolIdentifier==17, "UDP", protocolIdentifier==88, "EIGRP", protocolIdentifier==89, "OSPF", protocolIdentifier==103, "PIM", protocolIdentifier==143, "Ethernet")
| stats sum(eval(abs(Hint/4))) as TotalHint values(sourceIPv4Address) values(sourceTransportPort) values(destinationIPv4Address) values(destinationTransportPort) values(description) values(protocol) values(protocolIdentifier) values(Traffic) by sourceIPv4Address destinationIPv4Address protocolIdentifier
| append [search sourcetype %3d ipfix
    | regex sourceIPv4Address = "^(12\.10\.2\.(?:[0-9]|[1-9][0-9]|1(?:[0-9][0-9])|2(?:[0-4][0-9]|5[0-5])))$"
    | stats sum(eval(octetDeltaCount/4)) as Traffic count(Sequence) as Hint values(protocol) by sourceIPv4Address destinationIPv4Address protocolIdentifier destinationTransportPort sourceTransportPort
    | eval protocol=case(protocolIdentifier==1, "ICMP", protocolIdentifier==4, "IP", protocolIdentifier==6, "TCP", protocolIdentifier==17, "UDP", protocolIdentifier==88, "EIGRP", protocolIdentifier==89, "OSPF", protocolIdentifier==103, "PIM", protocolIdentifier==143, "Ethernet")
    | stats sum(eval(abs(Hint/4))) as TotalHint values(sourceIPv4Address) values(sourceTransportPort) values(destinationIPv4Address) values(destinationTransportPort) values(description) values(protocol) values(protocolIdentifier) values(Traffic) by sourceIPv4Address destinationIPv4Address protocolIdentifier]

But when I use GET to retrieve the result, I receive a 204 No Content response:

GET https://[IPaddress]:8089/services/search/jobs/1619301754.208/results

I used %3d instead of = in the sourcetype, but it does not work. Has anybody faced this issue?
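A 204 from the results endpoint typically means the job has not finished yet (or produced no results) rather than an encoding problem. A hedged sketch of the flow with curl — credentials and the SID are placeholders, and --data-urlencode makes manual %3d escaping unnecessary:

```shell
# 1. Create the job; note the <sid> in the response.
curl -k -u admin:changeme https://[IPaddress]:8089/services/search/jobs \
  --data-urlencode 'search=search sourcetype=ipfix | head 10'

# 2. Poll the job until <s:key name="isDone">1</s:key> appears.
curl -k -u admin:changeme https://[IPaddress]:8089/services/search/jobs/<sid>

# 3. Only then fetch the results (200 with a body once the job is done).
curl -k -u admin:changeme "https://[IPaddress]:8089/services/search/jobs/<sid>/results?output_mode=json"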
Hi Team, could you please help me with the below query? I have a CSV file which contains correlation IDs (1000+ records) in a single column. I need to provide this CSV to a Splunk search as input, search for each correlation ID, and display the results/events related to those correlation IDs. We are using Splunk Cloud; please let me know how I can achieve this.
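One common approach (a sketch, not tested against your data) is to upload the CSV as a lookup table file in Splunk Cloud and drive the search with a subsearch. The file name correlation_ids.csv and the column name correlation_id are assumptions here; the column must match both the CSV header and the field name in your events:

```spl
index=your_index
    [| inputlookup correlation_ids.csv
     | fields correlation_id ]
| table _time correlation_id _raw
```

The subsearch expands into (correlation_id="..." OR correlation_id="..." ...); the default subsearch result limit of 10,000 rows comfortably covers 1,000+ IDs.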
Hi All, we have a multisite Splunk architecture (version 8.1.0) and use LDAP for user authentication. We are not getting the complete list of users using | rest /services/authentication/users, although I checked each SH member and can see all users inside /opt/splunk/etc/users/. I have already checked a few technotes: https://community.splunk.com/t5/Security/Users-missing-from-Access-Control/m-p/487058 With regard to the above technote, we don't have those two attributes (edit_roles_grantable, grantableRoles) enabled, so it does not apply to my issue. I also did a rolling restart, but users are still not reflected in the REST search query. Please suggest. Thanks,
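For reference, a minimal form of that query with the row limit explicitly removed. Note that, in my experience, LDAP users typically only appear under this endpoint after they have logged in at least once — Splunk does not enumerate the whole directory — which may explain the gap:

```spl
| rest /services/authentication/users splunk_server=local count=0
| table title realname roles email
```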
I want to monitor a Splunk dashboard on my iPhone. Can I use Splunk Mobile with Splunk Enterprise 8.1? It seems that Splunk Cloud Gateway does not work with Splunk Enterprise 8.1.
Hi All. I want to check whether there is any means by which I can set up alerts if the current week's data is more than the average of the last 4 weeks' data. I have around 25 customers hitting 3 APIs. I want to check whether the first customer has hit the first API more in the current week than the average number of hits in the previous 4 weeks, and then send an alert if it exceeds the average. Similarly for all the customers and the 3 APIs. All of the above should happen with a single Splunk query; I don't want to write 25*3=75 queries for the alerts. I have written a query for 1 customer and 1 API:

index=nameofindex ns=namespace process="end" method!=GET earliest=-30d@d customer1
| search API="API1"
| timechart span=1w count
| timewrap w series=short
| eval mean=(s1+s2+s3+s4)/4
| where s0 > mean

Can anyone please help here? Any help is greatly appreciated. Thanks!
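A hedged sketch of one way to generalize this to every customer/API pair in a single search — the field names customer and API are assumptions. The prior four complete weeks are averaged per pair and compared with the current week:

```spl
index=nameofindex ns=namespace process="end" method!=GET earliest=-4w@w
| bin _time span=1w
| stats count by _time customer API
| eventstats avg(eval(if(_time < relative_time(now(), "@w"), count, null()))) as mean by customer API
| where _time >= relative_time(now(), "@w") AND count > mean
```

Alerting on "number of results > 0" then covers all 75 combinations with one saved search.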
Can you see stats from the forwarder console from Linux CLI instead?  
I am working on a project to compare IP and MAC addresses and determine whether they are seen by three different tools: Tanium, ACAS, and HBSS. Tanium fields: ip, dnsname, mac. ACAS: ip, dnsname, mac. HBSS: ip, dnsname, mac. When I use the join command, I am not able to include all IPs from each respective tool, since join only supports inner/left/outer. Another problem is that the search takes forever. Also, only ACAS has a root search with acceleration enabled. My goal is to see something like:

ip                mac                    dnsname   seen in ACAS   seen in Tanium   seen in HBSS
123.21.323.422    22:22:22:22:22:22:22   abesdc    Yes            No               No

Any feedback or suggestions will be appreciated. Thank you.
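One join-free sketch, assuming each tool's data can be reached in a single disjunction (the index names below are placeholders for however your environment scopes the three sources). A single stats by ip and mac scales far better than chained joins:

```spl
(index=tanium) OR (index=acas) OR (index=hbss)
| eval tool=case(index="tanium","Tanium", index="acas","ACAS", index="hbss","HBSS")
| stats values(dnsname) as dnsname values(tool) as tools by ip mac
| eval "seen in ACAS"=if(isnotnull(mvfind(tools,"ACAS")),"Yes","No")
| eval "seen in Tanium"=if(isnotnull(mvfind(tools,"Tanium")),"Yes","No")
| eval "seen in HBSS"=if(isnotnull(mvfind(tools,"HBSS")),"Yes","No")
| table ip mac dnsname "seen in ACAS" "seen in Tanium" "seen in HBSS"
```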
Hello! I am trying to group my log entries based on very specific criteria but can't seem to figure out how to do so. I have logs like this:

2021-04-23 16:47:26  User Id: 6211 Error Resolved
2021-04-23 16:47:25  Error[0] type 800
2021-04-23 16:47:25  User Id: 2345 Error Resolved
2021-04-23 16:47:23  Error[0] Error Response {"user_id":2345, "error_id":9101, ..............etc}
2021-04-23 16:47:23  Error[0] type 800
2021-04-23 16:47:22  Error[0] Error Response {"user_id":6211, "error_id":9100, ..............etc}
2021-04-23 16:47:22  Error[0] type 800

I am trying to get three events in my transactions: (1) the initial error type message, (2) the error response details, and (3) the error resolved message. However, I need the error response details and the error resolved message to contain the same user id. I currently have my query set up like this:

index=INDEX host=HOSTNAME sourcetype=SOURCETYPE
| rex field=_raw "Error\[0\]\stype\s(?<error_code>\d+)"
| rex field=_raw "User\sId:\s(?<user_id>\d+)\sError\sResolved"
| rex field=_raw "Error\[0\]\sError\sResponse\s\{\"user_id\":(?<user_id>\d+)"
| where user_id<20000 or error_code=800
| transaction startswith="Error[0] type 800" endswith="User Id:"

I'm lost on how to make sure that the transaction retrieves only the events where the user id of the error response details matches the error resolved message. Any ideas?
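Since transaction can also group on a common field, one hedged sketch is to key on user_id. Caveat: the "type 800" lines carry no user id, so they cannot be matched to a user this way — the transactions below pair only the response and resolved events:

```spl
index=INDEX host=HOSTNAME sourcetype=SOURCETYPE
| rex "User\sId:\s(?<user_id>\d+)\sError\sResolved"
| rex "Error\[0\]\sError\sResponse\s\{\"user_id\":(?<user_id>\d+)"
| transaction user_id startswith="Error Response" endswith="Error Resolved"
```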
Hi I have a request to find all users that have outcome=fail as the latest event. The outcome can be fail or success for example >>> index=foo |stats latest(outcome) by user _time Joe fail 2021-04... See more...
Hi, I have a request to find all users that have outcome=fail as the latest event. The outcome can be fail or success. For example:

index=foo | stats latest(outcome) by user _time

Joe fail    2021-04-23 19:57:25.998
Joe success 2021-04-23 19:40:05.888

I am trying to write a query where the result is only users with fail as the last outcome, not success. Any advice appreciated, TY!!
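A minimal sketch: take the latest outcome per user only (splitting by user and _time keeps every event as its own group, which is why the example shows both rows) and then filter on it:

```spl
index=foo
| stats latest(outcome) as last_outcome latest(_time) as last_time by user
| where last_outcome="fail"
| convert ctime(last_time)
```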
I have a pattern of text that appears in a log statement multiple times. Here is an example:

RandomStuff|LoginCount=5|RandomStuff|LoginCount=3|More RandomStuff|LoginCount=4|YetMoreRandomStuff

How can I use the rex command to get a sum of all the LoginCount values (which would total 12 in this example)? Thanks! Jonathan
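One hedged sketch: rex with max_match=0 captures every LoginCount into a multivalue field, and a per-event key lets stats sum the values back per event:

```spl
... base search ...
| rex max_match=0 "LoginCount=(?<login_count>\d+)"
| streamstats count as event_id
| mvexpand login_count
| stats sum(login_count) as total_logins by event_id
```

For the sample event above this yields total_logins=12.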
We have seen an issue where the Splunk UF stops reading a specific file once the file grows past 20MB, handing it to the batch processor (which waits for the file to be complete). My file receives real-time data and reaches >1GB in 3 hours, and Splunk cannot read such a huge file that way. Please provide the config settings I need to change for Splunk to read the file continuously.
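The 20MB handoff matches the default min_batch_size_bytes under [inputproc] in limits.conf (20971520 bytes), which is the size at which the tailing processor defers a file to the batch reader. A sketch — verify the attribute against your UF version's limits.conf spec before deploying:

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[inputproc]
# raise well above the largest size the monitored file will ever reach
min_batch_size_bytes = 2147483648
```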
Can I download the forwarder with a SHA-1 checksum? I only see MD5 offered, but the BigFix script we have needs SHA-1.
I'm trying to write a field extraction on the search head using a regex. The sample data is as follows:

FIELDS: user,email,type,ip
EVENT1: abc,abc@xyz.com,password ,127.0.0.1
EVENT2: xyz,,,127.0.0.5

The fields are comma-delimited whether or not there are values for each field. In the second event, the email and type fields have no values (the user and ip fields ALWAYS have values). Can someone assist me with a regex that handles all fields? If a field has no value (email or type), assign no value to that field. Thanks in advance.
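A hedged sketch of an inline extraction (the same pattern would work as a transforms.conf REPORT): [^,]* tolerates empty columns, and the eval turns empty strings into true nulls so the fields are genuinely absent when blank:

```spl
| rex "^(?<user>[^,]+),(?<email>[^,]*),(?<type>[^,]*),(?<ip>[^,]+)$"
| eval email=if(email="", null(), email)
| eval type=if(trim(type)="", null(), trim(type))
```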
Hi All, I have installed the free Splunk version. I am trying to upload lookups, but I don't seem to have that capability. I have downloaded "CVE Lookup - By Fuzzmymind.com" app, but cant use it bec... See more...
Hi All, I have installed the free Splunk version. I am trying to upload lookups, but I don't seem to have that capability. I have downloaded the "CVE Lookup - By Fuzzmymind.com" app, but can't use it because it's asking me to fill in the fields below, and I don't know what these fields mean. Does anybody out there know?
There are a couple of issues which often come up with the limits of mvexpand: one is the memory limit, the other is that it only applies to one field. I have developed a set of macros which go some way to solving both of these issues. First, I will share the single-field version of the macro; then, I will try to explain what each step is doing; finally, I will share the double-field version, which can easily be expanded to multiple fields. Please excuse the long post.

fillnull value="$originull$"
| eval steps=mvcount($field1$)
| streamstats sum(steps) as toprow
| eval maxrow=toprow
| append [| makeresults | eval toprow=1 | fields - _time]
| eventstats min(maxrow) as firsttop
| where isnotnull(maxrow) or toprow != firsttop
| makecontinuous toprow
| reverse
| filldown
| reverse
| eval $field1$=mvindex($field1$,(steps - 1) - (maxrow - toprow))
| fields - maxrow toprow firsttop steps
| foreach * [eval <<FIELD>>=if('<<FIELD>>'="$originull$",null(),'<<FIELD>>')]

So, what's going on?

fillnull value="$originull$"

The first thing to do is fill in the blanks. The macro takes a parameter for this, since you have to decide on a suitable string which doesn't clash with any data you want to keep; this string is reset to null at the end. The reason for doing this is that filldown is used later, and if there are any nulls in the original data, they will get overwritten with whatever happens to be above them.

| eval steps=mvcount($field1$)

Next, we count how many values are in the multivalue field for each event, i.e. how many events we will need in the expansion. Note that the field name (steps) is arbitrary; if it clashes with a field already in the data, you will lose that field, so change the name used in the macro. This goes for all the fields created by the macro.

| streamstats sum(steps) as toprow

Now we determine what will be the highest row number for each event.
| eval maxrow=toprow

We take a copy of this so it is kept for all the expanded rows.

| append [| makeresults | eval toprow=1 | fields - _time]

We add an additional event with toprow set to 1. This is because makecontinuous, used later, needs a place to start.

| eventstats min(maxrow) as firsttop

Now we find out what the lowest maxrow was. Note that the appended event we just added does not have a value for maxrow, so it does not interfere with this calculation.

| where isnotnull(maxrow) or toprow != firsttop

If the original first event had only one row, its maxrow would be 1, its toprow would also be 1, and firsttop would also be 1; in this instance, we want to remove the event we just added.

| makecontinuous toprow

Now for the magic: we create events using toprow (which is guaranteed to start at 1), ending with the total number of rows required. Note that at this point you could run into the rows limit, and you will need to modify your configuration to get around this.

| reverse

Reverse the order of events (they will have been put into toprow order by makecontinuous). This is so that filldown copies the right values into the extra events.

| filldown

Copy all the fields from the events into the new events. You may hit a memory issue here, as this is where most memory is taken up: all the multivalue fields are copied into their respective events.

| reverse

Return the events to their original order (possibly not strictly necessary).

| eval $field1$=mvindex($field1$,(steps - 1) - (maxrow - toprow))

Select one of the multi-values.

| fields - maxrow toprow firsttop steps

Remove all the extra fields.

| foreach * [eval <<FIELD>>=if('<<FIELD>>'="$originull$",null(),'<<FIELD>>')]

Restore the nulls. For the multi-mvexpand, steps is evaluated as the maximum multivalue count across the fields, and each field has the corresponding entry extracted in the expanded event.
This can be extended by adding more fields.

fillnull value="$originull$"
| eval steps=max(mvcount($field1$),mvcount($field2$))
| streamstats sum(steps) as toprow
| eval maxrow=toprow
| append [| makeresults | eval toprow=1 | fields - _time]
| eventstats min(maxrow) as firsttop
| where isnotnull(maxrow) or toprow != firsttop
| makecontinuous toprow
| reverse
| filldown
| reverse
| eval $field1$=mvindex($field1$,(steps - 1) - (maxrow - toprow))
| eval $field2$=mvindex($field2$,(steps - 1) - (maxrow - toprow))
| fields - maxrow toprow firsttop steps
| foreach * [eval <<FIELD>>=if('<<FIELD>>'="$originull$",null(),'<<FIELD>>')]
Hey there, I have two lookups, both with the same exact fields. I need all the fields from Lookup1.csv, which I have no problem getting, and only one field from Lookup2.csv. The only field I need from Lookup2.csv is lookup_Demand_X, for which I created an eval and gave it the name Demand_X; since the lookups use the same field names, I wanted to differentiate. The issue is that when I bring in the Demand_X column from Lookup2.csv, I consequently bring in all the data from Lookup2.csv as well. How can I keep only the data from Lookup1.csv and bring in just the one field (Demand_X) from Lookup2.csv?

| inputlookup Lookup1.csv
| inputlookup append=t Lookup2.csv
| eval Demand_X=case(lookup_Demand_Team="Red", lookup_Demand_X), Demand_Y=case(lookup_Demand_Team="Blue",lookup_Demand_Y)
| stats values(Demand_X) as Demand_X, values(Demand_Y) as Demand_Y values(lookup_Demand_Team) as lookup_Demand_Team, values(lookup_Grouping) as lookup_Grouping, values(lookup_desc) as lookup_desc, values(lookup_Sub) as lookup_Sub values(lookup_Service) as lookup_Service values(lookup_Type) as lookup_Type count(lookup_Count) as lookup_Count by lookup_Z_Field
| where isnotnull(Demand_Y)
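One sketch that avoids appending Lookup2.csv's rows entirely: keep Lookup1.csv as the base and pull the single field across with the lookup command, assuming lookup_Z_Field is the shared key between the two files:

```spl
| inputlookup Lookup1.csv
| lookup Lookup2.csv lookup_Z_Field OUTPUT lookup_Demand_X as Demand_X
```

Only the matched Demand_X values are added to Lookup1.csv's rows; none of Lookup2.csv's other rows or columns come along.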
I'm trying to track RingCentral data that we have in Splunk now; the objective is to sort the calls and alert us on missed ones. However, the data isn't organized the way I'm used to. This is a snippet of the event. What I want to do is use rex to extract and sort by result. These are some of the different results we have. If you need any more information, please let me know.

"result" : "Missed"
"result" : "Accepted"
"result" : "Call Connected

"duration":39,"type":"Voice","direction":"Inbound","action":"Phone Call","result":"Missed"
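A hedged sketch, with the index and sourcetype names as placeholders: extract result with rex and either summarize or alert on the missed calls (since the snippet looks like JSON, spath may also extract result without any regex):

```spl
index=your_index sourcetype=your_ringcentral_sourcetype
| rex "\"result\"\s*:\s*\"(?<result>[^\"]+)\""
| stats count by result
```

For an alert, replace the stats line with | search result="Missed" and trigger when the result count is greater than 0.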
What is the latest stable release of Splunk 8.x? We are planning a version upgrade from 7.3.5 to 8.x. I have heard of some instances where stability issues were seen on 8.x. Does anyone know the latest stable version?
Hello. Good afternoon. Currently, we are ingesting Windows events using Splunk_TA_windows. Within the inputs.conf file, we have a number of Event Codes blacklisted. Now we are looking to create a second TA (based on Splunk_TA_windows) in which two Event Codes will be removed from the blacklist. What would be the best method for setting the sourcetype for [WinEventLog://Security]? Our input looks like this:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 60
index = us_endpoint_microsoft_windows_security
renderXml = false
blacklist3 = 4820,4770,4773,4784,4789,4791,4792,4793,4764,4780,5376,5377,4797,4696,5712,4928,4929,4930,4931,4934,4935,4936,4937,4661,4662,5138,5139,4932,4933,4978,4979,4980,4981,4982,4983,4984,4650,4651,4652,4653,4655,4976,5049,5453,4654,4977,5451,5452,4626,4675,6272,6274,6275,6276,6277,6278,6279,6280,5378,5632,5633,4666,4667,4868,4869,4871,4872,4873,4874,4875,4876,4877,4878,4879,4880,4881,4883,4884,4886,4887,4889,4893,4894,4895,4896,4898,5120,5168,4658,4659,5031,5150,5151,5155,5157,5158,5159,5152,5153,4656,4658,4690,4671,4659,5149,5888,5889,5890,4659,4663,5039,4661,4818,4715,4817,4902,4904,4906,4713,4718,4864,4911,4913,4705,4714,4709,4710,4711,4712,5040,5041,5042,5043,5044,5045,5046,5047,5048,5440,5441,5442,5443,5444,5446,5448,5449,5450,5456,5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5471,5472,5473,5474,5477,4944,4945,4951,4952,4953,4954,4956,4957,4958,4819,4909,4910,5063,5064,5065,5066,5067,5068,5069,5070,5447,6144,6145,4674,4673,4674,4960,4961,4962,4963,4965,5478,5479,5480,5483,5484,5485,4621,4610,4622,4816,5038,5056,5057,5060,5061,5062,6281,5024,5027,5028,5029,5030,5032,5033,5035,5037,5058,5059,6400,6401,6402,6403,6404,6405,6406,6407,6408,4821,4822,4822,4823,4824,5071,5146,5147,6409,1105,5121,5122,5050,5123,5125,5126,5127,5156

And our sourcetype looks like this:
[WinEventLog:Security]
FIELDALIAS-action = Accesses AS action
REPORT-change_type = report_change_type
FIELDALIAS-dest = ComputerName AS dest
FIELDALIAS-dvc = host AS dvc
REPORT-object = report_object
FIELDALIAS-object_path = Object_Name AS object_path
EVAL-status = case(Keywords="Audit Success","success",Keywords="Audit Failure","failure")
FIELDALIAS-user = Account_Name AS user
EVAL-vendor = "Microsoft"
EVAL-vendor_product = "Windows"

Any suggestions would be appreciated. Thanks. Regards, Max
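For what it's worth, inputs.conf stanzas with the same name merge across apps by precedence, so a second TA shipping its own [WinEventLog://Security] overrides matching attributes rather than creating a duplicate input. A hedged sketch of that second TA's inputs.conf; the reduced blacklist is deliberately left as a placeholder to paste in:

```ini
# second TA, deployed only to the hosts that need the two extra event codes
[WinEventLog://Security]
index = us_endpoint_microsoft_windows_security
renderXml = false
# setting the sourcetype explicitly keeps the existing
# props.conf [WinEventLog:Security] stanza applying
sourcetype = WinEventLog:Security
# blacklist3 from Splunk_TA_windows with the two event codes removed
blacklist3 = <reduced list>
```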
Hello. Good afternoon.  We are receiving a successful bind error when trying to authenticate using SA-LDAPSearch.  Below is a screenshot of the error message: Any suggestions on how to resolve this? Regards, Max