All Posts



If you have a known maximum number of keys, you can do it without mvexpand, which can hit memory issues on large datasets.

| makeresults
| eval field_id="/key1/value1/key2/value2/key3/value3/key4/value4"
| rex field=field_id max_match=0 "/(?<k>[^/]*)/(?<v>[^/]*)"
| foreach 0 1 2 3 4 5 6 7 8 9 10 [ eval _k=mvindex(k, <<FIELD>>), {_k}=mvindex(v, <<FIELD>>) ]

Just list in the foreach statement the maximum number of possible key/value pairs you have.
Hello, you are correct, the alert name is in the data. It is in a single field called "Subject", as a string. The lookup table has a single field, "System_name", for example:

AAA
BBB
CCC
DDD

The main data also has just a single field called "Subject" (each row a string):

File system alert on AAA
File system alert on server serveraaaname
File system alert on BBB

I just want the output to have 2 fields:

Subject || system_name
File system alert on AAA || AAA
File system alert on serveraaaname || AAA
File system alert on BBB || BBB

Hopefully this makes sense.
Hi, I am trying to find the list of IDs that fail from my logs. Say I have:

2023-11-14T10:30:30,118 INFO Operation failed ..... ......
2023-11-14T10:30:40,118 INFO Operation ID ABCD .............
2023-11-14T10:35:25,118 INFO Operation success ..... ......
2023-11-14T10:35:30,118 INFO Operation id 1234 ''''''

I am trying to get the information as:

Timestamp | Status | ID
2023-11-14T10:30:30 | failed | ABCD
2023-11-14T10:30:30 | Success | 1234

I appreciate any help. Thanks.
I know, I tried latest(...), but as you mentioned, I removed anAction from the split by and am now seeing only the latest action for each user, with no duplicated user IDs in the results. Thanks!!!
Depending on how many cases you have, you can either do it inline with

| eval description=case(
    match(Message, "regex_expression1"), "Description1",
    match(Message, "regex_expression2"), "Description2",
    match(Message, "regex_expression3"), "Description3")

or, probably more practical, make a lookup - probably a wildcard-based lookup. That means creating a CSV with Message and Description fields and then a lookup DEFINITION that has the match type set to WILDCARD(Message). In that you could then put things like "DNS name resolution failure*" in the Message column, with a suitable description next to it. Using a wildcard match type means you don't have to write SPL to extract particular bits of the message to determine the lookup attribute.
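As a sketch of the wildcard-lookup approach described above (file name, field values, and descriptions are illustrative), the CSV and the matching lookup definition in transforms.conf might look like:

```
# message_descriptions.csv
Message,Description
DNS name resolution failure*,DNS lookup failed for the target host
Connection refused*,Remote service rejected the connection

# transforms.conf (lookup definition)
[message_descriptions]
filename   = message_descriptions.csv
match_type = WILDCARD(Message)
```

You would then enrich events with something like: | lookup message_descriptions Message OUTPUT Description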
Do you mean where Jname and Sname are the same AND the saber_color + strengths are the same, or something else? This will find you all the cases where the same name has the same combination of saber_color and strengths:

index=jedi OR index=sith
| eval name=coalesce(Jname, Sname)
| stats values(name) as names by saber_color strengths
| where mvcount(names)=1

To find where Jname!=Sname, change the mvcount test to equal 2. Good caveat about not using join - you should always avoid join; it's almost never the right solution!
I want to add a command to my add-on, with the aim of passing the Splunk SPL query results to that command, processing them, and returning the data to Splunk as statistical results. This is my SPL:

index="test" | stats count by asset | eval to_query=asset | fields to_query | compromise

But the processing of requests in my command is synchronous, which consumes a lot of time:

def stream(self, records):
    for record in records:
        logger.info(records)
        to_query = record.get("to_query")
        data = self.ti_compromise(to_query)
        logger.info(data)
        if data:
            res = deepcopy(record)
            if data[to_query]:
                for ioc in data[to_query]:
                    if not ioc["ioc"][2]:
                        ioc["ioc"][2] = " "
                    res.update({PREFIX + key: value for key, value in ioc.items()})
                    yield res
            else:
                res.update(EMPTY_RTN)
                yield res

The method self.ti_compromise(to_query) makes requests to other interfaces. Can I modify the above method to process concurrently in Splunk? If so, which approach would be better? Also, can Splunk's statistical output receive list types, such as:

[
  { "alert_name": "aaaaaaaaaaaa", "campaign": "", "confidence": "", "current_status": "" },
  { "alert_name": "bbbbbbbbbbbb", "campaign": "", "confidence": "", "current_status": "" }
]
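One possible approach is to fan the lookups out over a thread pool before yielding results. The sketch below is untested against the Splunk SDK and simplifies the original stream() method: PREFIX and EMPTY_RTN are assumed placeholders for the values in the post, and query_fn stands in for self.ti_compromise.

```python
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy

PREFIX = "ti_"                      # assumed field-name prefix from the original command
EMPTY_RTN = {"ti_alert_name": ""}   # assumed "no result" placeholder fields

def fetch_all(records, query_fn, max_workers=8):
    """Run query_fn for every record's to_query value concurrently,
    then yield enriched result rows, mirroring the original stream()."""
    records = list(records)
    queries = [r.get("to_query") for r in records]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results line up with records
        results = list(pool.map(query_fn, queries))
    for record, data in zip(records, results):
        if not data:
            continue
        to_query = record.get("to_query")
        res = deepcopy(record)
        if data.get(to_query):
            for ioc in data[to_query]:
                res.update({PREFIX + k: v for k, v in ioc.items()})
                yield res
        else:
            res.update(EMPTY_RTN)
            yield res
```

Inside the streaming command you would pass query_fn=self.ti_compromise. Note that a streaming command receives records in chunks, so the pool only parallelises within each chunk; threads are a reasonable fit here because the work is network-bound.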
Do you want to see the latest action by host AND login ID, or just the last action by login ID? Anyway, the way to do this is:

| stats max(_time) AS lastAttempt latest(anAction) as lastAction BY host aLoginID

rather than putting the action into the split by.
Hello @richgalloway, What is the difference between KV_MODE=auto and KV_MODE=none? Also, the Add-on Builder is not supported in Splunk Cloud - how can we build the app on Windows? Thanks.
Just use the lookup as a lookup - that's what it's intended for. It's a little unclear what exists in the alert and what exists in the lookup based on this statement:

"If systemname is found in the lookup table that matches on what is found in the alert, output systemname"

so I'm assuming you have an Alert Name in your data, in which case just do

| lookup your_lookup_file.csv "Alert Name" OUTPUT "System Name"

assuming "Alert Name" is the name of the field in your data/lookup and "System Name" is the name of the output field in the lookup.
Wow, thank you for such a quick response. What is the maximum for Hot > Cold? The data size is negligible - 17 MB for 3 months - so no issues with disk size. Compared to the security logs, it's a drop in the ocean.
There is no "forever" setting for index retention.  You can set a very long retention time (10 years or more) and a large size (make sure the disk is big enough for all that data) and Splunk will keep the data long enough (probably until something forces you to reload the CMDB data).
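As a sketch of the "very long retention" suggestion above (the index name and sizes are illustrative, not a recommendation), the relevant settings live in indexes.conf:

```
[cmdb_data]
homePath   = $SPLUNK_DB/cmdb_data/db
coldPath   = $SPLUNK_DB/cmdb_data/colddb
thawedPath = $SPLUNK_DB/cmdb_data/thaweddb
# ~10 years before buckets roll to frozen (frozen data is deleted by default)
frozenTimePeriodInSecs = 315360000
# cap the total index size well above the expected growth
maxTotalDataSizeMB = 500000
```

Whichever of the two limits is hit first triggers the roll to frozen, so both the time and size settings need to be generous.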
We use Splunk for data analysis and monitoring. We have the ServiceNow add-on to collect CMDB data. It goes back and collects all the data, then only collects new info on changes. Therefore, if we have any logs at any point moving from hot/cold to cold/frozen, it will remove the data points we require. The add-on is not set up to grab all the data again. This means we cannot lose any of that data, otherwise the results will be incomplete. I would like to make it so that the data never goes from hot/cold to cold/frozen, or get some input on how we can best make this scenario work.
Thank you. I modified it a little and it worked.
In general, all subsearches have limitations, but most SQL people come to Splunk thinking that join is the way to go, which is not the case. The first choice should always be NOT to use join. Append will also have limits on the number of results - there are plenty of discussions on the topic, and Splunk has documentation on these limits. So, really, if your data size is large you need to be aware of these limits; but also, from a performance point of view, join is not the best way to go, and searches using join will impact other users on the search head.
If you run the alert manually, does it find any data? If you want to find an event that is generated at 12:30, that event will probably not be picked up until 12:45, when the alert next runs on your cron schedule. Your time range is set to the last 15 minutes, so depending on exactly WHEN your alert runs, you may miss events: if the event occurs at 12:30 and is indexed by Splunk at 12:30:04, but your search ran at 12:30:02, it will not find it. The next search, which might run at 12:45:06, will also not find it, as it only searches between 12:30:06 and 12:45:06. So please set your search to run with exact time specifiers, with "snap to time" using @m.
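Concretely, for a search that runs every 15 minutes, the snapped time bounds would look like this (set in the alert's time range):

```
earliest = -15m@m
latest   = @m
```

With @m both boundaries snap to the whole minute, so consecutive runs search back-to-back windows (e.g. 12:15:00-12:30:00, then 12:30:00-12:45:00) with no gaps, regardless of the exact second the scheduler fires.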
Hi All - Pretty new to Splunk and having an issue sorting/parsing data from our syslog server. We have many RHEL 7 Linux hosts all sending their logs to one server, where they get aggregated. This works fine: I can go into /var/log/secure, messages, etc. and see entries from all the hosts we have. We are running a Splunk forwarder on this host in the hope that it would forward all the data to Splunk as it hits this RHEL 7 log aggregator. We just have a single search head/indexer, and if I run the query index="*" I do get quite a lot of results, BUT it only shows 2 hosts: the Splunk instance and the RHEL 7 system that we are aggregating the logs on. If I change the search to index="*" hostname, with the hostname being one of the RHEL hosts, I can find the entries specific to that host. I hope this makes sense? So somehow I need to tell Splunk about these hosts so they are recognized as separate hosts. What can I do to make this work? Thank you all in advance!
The table _time _raw and spath effectively reparse the JSON; otherwise you have the extracted fields from the ingest as well as the fields from the spath. Without seeing the actual events, I can't tell what might be causing the disparity between the counts and the number of lines. Perhaps there are extra blank lines or newline characters.
I have installed Splunk forwarder 9.1.1 on a Linux server, but the splunk user and group could not be created by the RPM installation. I thought that might have been the reason I kept getting an inactive forward-server, but I ended up getting a new error. When I try to restart the Splunk forwarder, I get the following error:

splunkd is not running. "failed splunkd.pid doesn't exist"

and when I try to have the Splunk forwarder list the forward-server, I get the following error 3 times:

'tcp_conn_open_afux ossocket_connect failed with no such file or directory'

It still lists my server as an inactive one, despite another Splunk forwarder Linux host properly connecting to Splunk Enterprise via an SSL connection. I have also made sure that Splunk is listening on the receiving port (9997); it's the same port used by the other Linux host to forward logs.
@tedgett  I ended up taking the existing dashboard and making my own version with the corrected queries.