All Posts

OK, so you don't have any correlation in the lookup to match against the event... So, if you have a field 'Subject' containing the string "File system alert on ..." then you can get the system name from that like this:

| rex field=Subject "File system alert on (?<system>.*)"

which will work for AAA and BBB, but I am not sure how you would map 'server serveraaaname' to AAA in your example - what is the rule for that mapping?
Thanks @bowesmana, for looking into this. Good point about whether the success/failed message relates to a specific id or not - that is why I am trying to map the timestamp of the success/fail to that processed id, as well as filter by host to compare. The actual log info is:

http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,062 INFO REQEST XML
http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,062 INFO Operation started
http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,112 ERROR Operation error .WsdlFault: Failed to process CALL STACk
http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,118 INFO Operation failed
http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,118 INFO request processed
http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,118 ERROR exception thrown regarding {ABCDEFGH-IJKL}
http-nio-8080-exec-6 nteg 2023-11-14T10:30:30,118 ERROR exception thrown regarding {ABCDEFGH-IJKL}

Thanks
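One possible sketch, assuming the worker-thread prefix (http-nio-8080-exec-6) ties the lines of a single request together; every extracted field name below (thread, ts, msg, status, op_id) is an illustrative assumption, not a field that exists in the data:

| rex field=_raw "^(?<thread>\S+)\s+\S+\s+(?<ts>\S+)\s+(?<level>\w+)\s+(?<msg>.*)"
| eval status=case(match(msg, "Operation failed"), "failed", match(msg, "Operation success"), "success")
| rex field=msg "regarding \{(?<op_id>[^}]+)\}"
| stats min(ts) as Timestamp values(status) as Status values(op_id) as ID by host thread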
I don't see any new line character. I have attached a snippet of the event. Please let me know how I can send the event file (.json) - json is not a supported attachment type here.
I use Splunk UBA 5.3.0. When I try to add a data source with Splunk Direct, raw events, I get the error "There was an error processing your request. It has been logged (ID ...)". How do I fix it? I am using Splunk Enterprise 9.0.0 (both Splunk Enterprise and Splunk UBA are fresh installs). Thanks for the help.
How do you know that the success/failed message relates to a specific id? In your example, the status comes before the message id event. What if you have more than one event id coming in and they are out of sync?
If you have a known max limit of keys, then you can do it without the mvexpand, which, if you have a large dataset, can hit memory issues.

| makeresults
| eval field_id="/key1/value1/key2/value2/key3/value3/key4/value4"
| rex field=field_id max_match=0 "/(?<k>[^/]*)/(?<v>[^/]*)"
| foreach 0 1 2 3 4 5 6 7 8 9 10 [ eval _k=mvindex(k, <<FIELD>>), {_k}=mvindex(v, <<FIELD>>) ]

Just put in the foreach statement the maximum number of possible key/value pairs you have.
Hello, You are correct, the alert name is in the data. It is under a single field called "Subject" in the form of a string. But the data in the lookup table is like this, with a single field "System_name", for example:

AAA
BBB
CCC
DDD

The main data has just a single field as well, called "Subject" (each row a string):

File system alert on AAA
File system alert on server serveraaaname
File system alert on BBB

I just want the output to be in 2 fields:

Subject || system_name
File system alert on AAA || AAA
File system alert on serveraaaname || AAA
File system alert on BBB || BBB

Hopefully this makes sense.
Hi, I am trying to find the list of ids that fail from my logs. Say I have:

2023-11-14T10:30:30,118 INFO Operation failed ..... ......
2023-11-14T10:30:40,118 INFO Operation ID ABCD .............
2023-11-14T10:35:25,118 INFO Operation success ..... ......
2023-11-14T10:35:30,118 INFO Operation id 1234 ''''''

I am trying to get the information as:

Time stamp | Status | ID
2023-11-14T10:30:30 | failed | ABCD
2023-11-14T10:30:30 | Success | 1234

I appreciate any help. Thanks
I know - I tried latest(...), but like you mentioned I removed anAction from the split by and am now seeing only the latest action for each user, with no duplicated user IDs in the results. Thanks!!!
Depending on how many cases you have, you can either do it inline with

| eval description=case(match(Message, "regex_expression1"), "Description1", match(Message, "regex_expression2"), "Description2", match(Message, "regex_expression3"), "Description3")

or, probably more practical, make a lookup - probably a wildcard-based lookup, which means creating a CSV with Message, Description fields and then a lookup DEFINITION that has the match type set to

WILDCARD(Message)

In that you could then put things like "DNS name resolution failure*" in the Message column and then a suitable description. Using a wildcard match type means you don't have to write SPL to extract particular bits of the message to determine the lookup attribute.
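A minimal sketch of the pieces, with hypothetical names throughout (message_descriptions.csv and the message_descriptions definition are invented for illustration). The CSV:

Message,Description
DNS name resolution failure*,DNS lookup failed for the requested host
Connection timed out*,The remote host did not respond in time

The lookup definition in transforms.conf:

[message_descriptions]
filename = message_descriptions.csv
match_type = WILDCARD(Message)

and then in the search:

| lookup message_descriptions Message OUTPUT Description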
Do you mean where Jname and Sname are the same AND the saber_color + strengths are the same, or something else? This will find you all the cases where the same name has the same combination of saber_color and strengths:

index=jedi OR index=sith
| eval name=coalesce(Jname, Sname)
| stats values(name) as names by saber_color strengths
| where mvcount(names)=1

and to find where Jname!=Sname, change the mvcount to equal 2. Good caveat on not using join - you should always avoid join; it's almost never the right solution!
I want to add a command to my add-on, with the aim of passing the Splunk SPL query results to that command, and then processing them to return the data to Splunk's statistics table. This is my SPL:

index="test" | stats count by asset | eval to_query=asset | fields to_query | compromise

But the processing of requests in my command is synchronous, which consumes a lot of time:

def stream(self, records):
    for record in records:
        logger.info(records)
        to_query = record.get("to_query")
        data = self.ti_compromise(to_query)
        logger.info(data)
        if data:
            res = deepcopy(record)
            if data[to_query]:
                for ioc in data[to_query]:
                    if not ioc["ioc"][2]:
                        ioc["ioc"][2] = " "
                    # PREFIX and EMPTY_RTN are constants defined elsewhere in the add-on
                    res.update({PREFIX + key: value for key, value in ioc.items()})
                    yield res
            else:
                res.update(EMPTY_RTN)
                yield res

The method self.ti_compromise(to_query) makes requests to other interfaces. Can I modify the above method to use concurrent processing in Splunk? If so, which approach would be better? Also, can Splunk's statistics table receive list types, such as:

[
    {
        "alert_name": "aaaaaaaaaaaa",
        "campaign": "",
        "confidence": "",
        "current_status": ""
    },
    {
        "alert_name": "bbbbbbbbbbbb",
        "campaign": "",
        "confidence": "",
        "current_status": ""
    }
]
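One possible pattern, sketched under the assumption that self.ti_compromise is thread-safe and I/O-bound, is to fan the lookups out with a thread pool and consume the results in order; the worker count of 8 is an arbitrary illustration:

from concurrent.futures import ThreadPoolExecutor

def stream(self, records):
    records = list(records)  # materialize so every lookup can be submitted up front
    with ThreadPoolExecutor(max_workers=8) as pool:
        # one concurrent ti_compromise call per record
        futures = [pool.submit(self.ti_compromise, r.get("to_query"))
                   for r in records]
        for record, future in zip(records, futures):
            data = future.result()  # blocks only until this record's lookup finishes
            # ...unpack data into copies of record and yield them,
            # exactly as in the synchronous loop above...
            yield record

As for the list question: a streaming command yields one flat dict per output row, so a list like the one above would be returned by yielding one record per element rather than one record holding the whole list.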
Do you want to see the latest action by host AND login id or just the last action by login id? Anyway, the way to do this is by doing

| stats max(_time) AS lastAttempt latest(anAction) as lastAction BY host aLoginID

rather than putting action into the split by.
Hello @richgalloway, What is the difference between KV_MODE=auto and KV_MODE=none? Also, Add-on Builder is not supported in Splunk Cloud; how can we build the app on Windows? Thanks.
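For context, a sketch of where that setting lives (the sourcetype name is made up): KV_MODE=auto asks Splunk to extract key=value pairs automatically at search time, while KV_MODE=none disables that automatic extraction.

# props.conf - illustrative stanza, sourcetype name is hypothetical
[my_sourcetype]
KV_MODE = auto
# KV_MODE = none would turn automatic search-time extraction off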
Just use the lookup as a lookup - that's what it's intended for. It's a little unclear what exists in the alert and what exists in the lookup based on this statement:

"If systemname is found in the lookup table that matches on what is found in the alert, output systemname"

so I'm assuming you have an Alert Name in your data, so just do

| lookup your_lookup_file.csv "Alert Name" OUTPUT "System Name"

assuming those are the names of your fields in the data/lookup (Alert Name) and the name of the field in the lookup is "System Name".
Wow, thank you for such a quick response. What is the maximum for Hot > Cold? The data size is negligible (17 MB for 3 months), so no issues with disk size. Compared to the security logs it's a drop in the ocean.
There is no "forever" setting for index retention.  You can set a very long retention time (10 years or more) and a large size (make sure the disk is big enough for all that data) and Splunk will keep the data long enough (probably until something forces you to reload the CMDB data).
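For reference, retention is controlled per index in indexes.conf; a minimal sketch, assuming a hypothetical index named cmdb, with a roughly 10-year time limit and a size cap far above 17 MB per quarter:

# indexes.conf - stanza and index name are illustrative
[cmdb]
# buckets roll to frozen (deleted, unless coldToFrozenDir is set) only after ~10 years
frozenTimePeriodInSecs = 315360000
# total size cap for the index, in MB
maxTotalDataSizeMB = 51200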
We use Splunk for data analysis and monitoring. We have the ServiceNow add-on to collect CMDB data. It goes back and collects all the data, then only collects new info on changes. Therefore, if we have any logs at any point being rolled from hot/cold to cold/frozen, it will remove the data points we require. The add-on is not set up to grab all the data again. This means we cannot lose any of that data, otherwise the results will be incomplete. I would like to make it so that the data never goes from hot/cold to cold/frozen, or have some input on how we can best make this scenario work.
Thank you. I modified it a little and it worked.
In general, all subsearches have limitations, but most SQL people come to Splunk thinking that join is the way to go, which is not the case. The first choice should always be to NOT use join. Append will also have limits on the number of results - there are plenty of discussions on the topic, and Splunk has documentation on these limits. So, really, if your data size is large you need to be aware of these limits, but also, from a performance point of view, join is not the best way to go, and searches using join will impact other users on the search head.
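To make the alternative concrete, a hedged sketch (index and field names are invented for illustration) of doing the correlation a SQL-style join would do in a single stats pass over both datasets:

index=orders OR index=shipments
| eval id=coalesce(order_id, ship_order_id)
| stats values(status) as status values(carrier) as carrier by id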