All Posts

Hi @CyberWolf, let me understand: you want only one field, called address, containing city and state, is that correct? In that case, please try this:

<your_search> | rex "\"city\":\"(?<city>[^\"]+)\",\"state\":\"(?<state>[^\"]+)\"" | eval address=state." ".city

Ciao. Giuseppe
Does anyone know if the GlobalMantics dataset is available in the free version of Splunk, or is it only included in the paid plans? If it is available in the free version, how and where can I access that file?
Thanks, but I think I wasn't clear. What I'm trying to do is get all of that into one field, called (?<address>), not separated into city and state.
Thank you for the quick answer! The question here is whether <new-value> can be a variable found within the event, using a regexp that would extract the value.
transforms.conf:

[index_reset]
SOURCE_KEY = _raw
DEST_KEY = _MetaData:index
REGEX = .
FORMAT = index::<new-value>

This searches the _raw data feed for the regex match (change my example), then applies the FORMAT to the DEST_KEY. Test in a development environment first to fine-tune this process; it can be tricky to get the regex and format just right.
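Note that a transform like this only takes effect once it is referenced from props.conf on the instance that parses the data (indexer or heavy forwarder). A minimal sketch, assuming a hypothetical sourcetype my_tcp_source for the incoming data:

```
# props.conf -- wire the index_reset transform to the incoming sourcetype
[my_tcp_source]
TRANSFORMS-index_reset = index_reset
```

Both files need to be deployed to the parsing tier, and a restart is required for index-time settings to take effect.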
Creating the regex would be easy enough, but it looks like your data is already coming in JSON or XML format.  Is there a chance that the fields are already extracted as "city" and "state"?  If not, I would recommend revisiting the ingestion props as a best practice.  Rather than creating a lot of regex at search time, if you had that field extraction during indexing, any changes to the data would auto-extract the new fields.

.*\"city\":\"(?<city>[^\"]+)\",\"state\":\"(?<state>[^\"]+)\"
Hi @CyberWolf, with the logs you have, you can use a regex like the following:

| rex "^[^,]+,\"city\":\"(?<city>[^\"]+)\",\"state\":\"(?<state>[^\"]+)"

which you can test at https://regex101.com/r/ZafgnI/1 I could be more specific if you shared a complete log, not only a part of it. Ciao. Giuseppe
Are you trying to capture the data in a single field or multiple fields?  Is this to be done at index time or search time?
Thanks.  We have one on-prem license too. This on-prem env will be used for 2 days every quarter.
Hi there,

I'm currently working on a search to get the usage of indexes, so I have an overview of which indexes get used in searches and which don't, so I can speak with the use-case owners about whether the data is still needed and why it isn't used. This is the current state of the search:

| rest "/services/data/indexes"
| table title totalEventCount frozenTimePeriodInSecs
| dedup title
| append [search index=_audit sourcetype="audittrail" search_id="*" action=search earliest=-24h latest=now
    ``` Regex extraction ```
    | rex field=search max_match=0 "index\=\s*\"?(?<used_index>\S+)\"?"
    | rex field=search max_match=0 "\`(?<used_macro>\S+)\`"
    | rex field=search max_match=0 "eventtype\=\s*(?<used_evttype>\S+)"
    ``` Eventtype resolving ```
    | mvexpand used_evttype
    | join type=left used_evttype [| rest "/services/saved/eventtypes" | table title search | stats values(search) as search by title | rename search as resolved_eventtype, title as used_evttype]
    | rex field=resolved_eventtype max_match=0 "eventtype\=\s*(?<nested_eventtype>\S+)"
    | mvexpand nested_eventtype
    | join type=left nested_eventtype [| rest "/services/saved/eventtypes" | table title search | stats values(search) as search by title | rename search as resolved_nested_eventtype, title as nested_eventtype]
    ``` Macro resolving ```
    | mvexpand used_macro
    | join type=left used_macro [| rest "/servicesNS/-/-/admin/macros" count=0 | table title definition | stats values(definition) as definition by title | rename definition as resolved_macro, title as used_macro]
    | rex field=resolved_macro max_match=0 "\`(?<nested_macro>[^\`]+)\`"
    | mvexpand nested_macro
    | join type=left nested_macro [| rest "/servicesNS/-/-/admin/macros" count=0 | table title definition | stats values(definition) as definition by title | rename definition as resolved_nested_macro, title as nested_macro]
    | where like(resolved_nested_macro,"%index=%") OR isnull(resolved_nested_macro)
    ``` Merge resolved stuff into one field ```
    | foreach used* nested* [eval datasrc=mvdedup(if(<<FIELD>>!="",mvappend(datasrc, "<<FIELD>>"),datasrc))]
    | eval datasrc=mvfilter(!match(datasrc, "usedData"))
    | eval usedData = mvappend(used_index, if(!isnull(resolved_nested_eventtype),resolved_nested_eventtype, resolved_eventtype), if(!isnull(resolved_nested_macro),resolved_nested_macro, resolved_macro))
    | eval usedData = mvdedup(usedData)
    | table app user action info search_id usedData datasrc
    | mvexpand usedData
    | eval usedData=replace(usedData, "\)","")
    | where !like(usedData, "`%`") AND !isnull(usedData)
    | rex field=usedData "index\=\s*\"?(?<usedData>[^\s\"]+)\"?"
    | eval usedData=replace(usedData, "\"","")
    | eval usedData=replace(usedData,"'","")
    | stats count by usedData ]

The search first gets the indexes via | rest, with their event count and retention time. Then audittrail data gets appended, and the used indexes, macros and eventtypes are extracted from the search string and resolved (since some apps in my environment use nested eventtypes/macros, they get resolved twice). It still needs some sanitizing of the extracted used indexes. That gives me a table like this (limited to Splunk-internal indexes as an example):

title           | totalEventCount | frozenTimePeriodInSecs | count | usedData
_audit          | 771404957       | 188697600              |       |
_configtracker  | 717             | 2592000                |       |
_dsappevent     | 240             | 5184000                |       |
_dsclient       | 232             | 5184000                |       |
_dsphonehome    | 843820          | 604800                 |       |
_internal       | 7039169453      | 15552000               |       |
_introspection  | 39100728        | 1209600                |       |
_telemetry      | 55990           | 63072000               |       |
_thefishbucket  | 0               | 2419200                |       |
                |                 |                        | 22309 | _*
                |                 |                        | 1039  | _audit
                |                 |                        | 2     | _configtracker
                |                 |                        | 1340  | _dsappevent
                |                 |                        | 1017  | _dsclient
                |                 |                        | 1     | _dsclient]
                |                 |                        | 709   | _dsphonehome
                |                 |                        | 2089  | _internal
                |                 |                        | 117   | _introspection
                |                 |                        | 2     | _metrics
                |                 |                        | 2     | _metrics_rollup
                |                 |                        | 2     | _telemetry
                |                 |                        | 2     | _thefishbucket

But I haven't managed to merge the rows together, so that I have count=1039 for _audit, plus the 22309 from searches that use all internal indexes, in one row for each index.
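One way to get one row per index is to compute the per-index search counts in a subsearch and join them onto the index list, instead of appending them. A simplified sketch that keeps only the plain index= extraction (the macro/eventtype resolving from above would slot into the subsearch, and wildcard entries like _* would still need separate handling):

```
| rest "/services/data/indexes"
| table title totalEventCount frozenTimePeriodInSecs
| dedup title
| join type=left title
    [ search index=_audit sourcetype="audittrail" action=search earliest=-24h latest=now
      | rex field=search max_match=0 "index\s*=\s*\"?(?<usedData>[^\s\"]+)\"?"
      | mvexpand usedData
      | stats count as searchCount by usedData
      | rename usedData as title ]
| fillnull value=0 searchCount
```

Note that join subsearches are subject to the usual subsearch result limits, so for large environments a stats-based merge over a common key may be safer.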
I'm trying to capture address, city and state, which are all on one line, but they contain ", : and , and I would like to exclude those (quotes, commas and colons). See the test example below:

12345 noth test Avenue","city":"test","state":"test",
How do I filter using a text box with multiple comma-separated keywords? How do I filter my table data? This is my query:

index=atvi_test sourcetype=ncc
| rename hostname as Host component as Component filename as FileName
| eval source_list=split("*ORA*", ",")
| search Environment=QTEST Component IN (*)
| search NOT Message IN (null)
| table PST_Time Environment Host Component FileName Message
| sort PST_Time
| search [| makemv delim="," source_list
    | eval search_condition=mvjoin(source_list, " OR Message=*")
    | eval search_condition="Message=*" . search_condition
    | return $search_condition]
Hi, Perhaps this question has been asked before...  Is it possible to store events coming from the same source in different indexes, depending on their content? The use case is that some events are more sensitive than others and need to be sent to different indexes. In our case, the index name would appear within the event, as a formatted field, like [index: SENSITIVE]. The input is a TCP port. Any help would be appreciated, and I would rather take no for an answer than be led into some intricate solution. Thank you, Jean
For the SPL you need to escape all backslashes and quotes. regex101 requires you to escape forward slashes by default (which is not part of the regex itself, but of the default PHP PCRE delimiter syntax). SEDCMD uses raw regex.
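To make the difference concrete, here is the same hypothetical pattern (matching "city":"...") written for each context; the SEDCMD class name is made up for illustration:

```
SPL rex (quotes inside the double-quoted search string must be escaped):
  | rex "\"city\":\"(?<city>[^\"]+)\""

props.conf SEDCMD (raw regex, quotes need no escaping):
  SEDCMD-drop_city = s/"city":"[^"]+"//g

regex101 with the default / delimiters (only literal forward slashes need escaping):
  "city":"[^"]+"
```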
Remove the ':' on the end of the regex and it should work.

You can't get | makeresults and props to work at the same time.  makeresults creates synthetic events, and props only work on real events.
Hi @richgalloway  Thanks for your reply. Apologies for the delay in replying, but I had to test it. Please see the results here: https://regex101.com/r/7u6vAP/1 Now, as I have asked @ITWhisperer, I need to figure out how to make both the | makeresults | rex mode=sed ........ and the props SEDCMD-reducing_4702=? work to strip the event, thus reducing its weight in bytes. Thank you
Hi @ITWhisperer, Please have a look at https://regex101.com/r/wRe1Ai/1 That works in the regex101 web portal, but it does not work under makeresults or with SEDCMD in props.conf. I had to remove the (?ms).*(?<ei>\ part, as SEDCMD s/ would accept neither it nor the <ei> bit. Can you please work out the exact SEDCMD-reducing_4702=s/........g expression that will be compatible with SEDCMD? Also, can you try that in Splunk, e.g. with the | makeresults SPL, and see if the SPL you provide would remove the unwanted parts from the event? Thank you.
Hi Mario, Thanks, the query worked as we input max. Can we alias the result field name (toInt((tokenExpirationDateTime - now()) / (24*60*60*1000)))) to tokenExpirationDateTime?
Again - there is no way to update an existing event within Splunk, so you can't have only the latest status. As simple as that. You can try to work around that by ingesting the state periodically and holding it in a lookup or something similar, but this approach doesn't scale well.
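The lookup approach could be sketched as a scheduled search like the following; the index, sourcetype and field names (status, host) and the lookup filename are placeholders:

```
index=my_index sourcetype=my_status earliest=-15m
| stats latest(status) as status, latest(_time) as last_seen by host
| outputlookup current_status.csv
```

Dashboards then read only the latest state with | inputlookup current_status.csv instead of searching the raw events each time.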
Thanks for your reply, I will try that first. If it succeeds, I'll come back and accept it as the solution, so other people who have the same problem can use these steps. Danke,  Zake