All Posts


Hello PualPanther, We can't actually run a curl command since we aren't sure what the webhook is. After reading over the documentation and contacting support, it seems that the app should create the webhook, but I'm not sure what it is. If I need to create the webhook myself, I'm also not sure how to create one for the app.
Working on supplementing a search we use to implement conditional access policies. The search identifies successful logins and produces a percentage of compliant logins over a period. What I am trying to add is the last login time, which is identified by "createdDateTime" in the logs. Here is the current search:

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| eval DeviceCompliance='deviceDetail.isCompliant'
| chart count by userPrincipalName DeviceCompliance
| eval total=true + false
| rename true as compliant
| eval percent=((compliant/total)*100)
| table userPrincipalName compliant total percent

I have tried adding/modifying pipes like "stats latest(createdDateTime) by userPrincipalName compliant total percent", but this inserts the time into the true/false fields. I feel that I am modifying the data too much up front and perhaps need to change the piping order. All suggestions welcome.
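A minimal sketch of one possible approach, assuming DeviceCompliance arrives as the strings "true"/"false" and that createdDateTime is an extracted field (both are assumptions, not confirmed above): compute the compliant count, the total, and the last login in a single stats pass instead of chart, so latest(createdDateTime) stays attached to each user rather than landing in the true/false columns.

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| eval DeviceCompliance='deviceDetail.isCompliant'
``` compliant = events where the device was compliant; total = all successful logins ```
| stats count(eval(DeviceCompliance="true")) as compliant count as total latest(createdDateTime) as last_login by userPrincipalName
| eval percent=round((compliant/total)*100, 2)
| table userPrincipalName compliant total percent last_login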
Final version... obviously, inside a script or an interactive menu with parameters, this should work fine:

curl -skL -u 'usr:pwd' 'https://SHC_NODE:8089/servicesNS/-/-/saved/searches' --get -d 'output_mode=json' -d 'count=0' | jq -r '
.entry[]
| select(.acl.app == "MYAPP" and .acl.owner == "MYUSER")
| .name + " : " + .acl.app + " : " + .author + " : " + .acl.owner + " : " + .acl.sharing + " : " + (.content.disabled|tostring)
'

Alternatively:

curl -skL -u 'usr:pwd' 'https://SHC_NODE:8089/servicesNS/-/-/saved/searches' --get -d 'output_mode=json' -d 'count=0' | jq -r '
.entry[]
| select(.acl.app == "MYAPP" and .acl.owner == "MYUSER")
| [.name, .acl.app, .author, .acl.owner, .acl.sharing, .content.disabled]
| @csv
'

Thanks all.
Example from the raw logs:

"address":"1234 Nothing 2C Avenue","city":"something","state":"RD"

I would like to have one field named Address containing the address, city, and state together (1234 Nothing 2C Avenue something RD), ignoring the quotes, commas, and colons. What I have:

index=something | rex field=_raw "address\"\:\"(?<address>.*?)\"\,\""

which gives me a field named address containing: 1234 Nothing 2C Avenue","city":"something","state":"RD"
All in a single field so I can use it later for a dc (distinct count).
Correct, it comes in JSON and I don't have control of it, but what I'm trying to get is address, city, and state all in one field, ignoring the commas, quotes, and colons.
Hi @CyberWolf, let me understand: you want only one field, called address, containing city and state, is it correct? In this case, please try this:

<your_search> | rex ".*\"city\"\:\"(?<city>[^\"]+)\"\,\"state\"\:\"(?<state>[^\"]+)\"" | eval address=state." ".city

Ciao. Giuseppe
Does anyone know if the GlobalMantics dataset is available in the free version of Splunk, or is it only included in the paid plans? If it is available in the free version, how and where can I access that file?
Thanks, but I think I wasn't clear. What I'm trying to do is get all of that into one field called (?<address>), not separated into city and state.
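A minimal sketch of one way to get everything into a single field, assuming the raw events always contain address, city, and state in that order with the exact quoting shown earlier (test against complete events first):

index=something
| rex field=_raw "\"address\"\:\"(?<addr>[^\"]+)\"\,\"city\"\:\"(?<city>[^\"]+)\"\,\"state\"\:\"(?<state>[^\"]+)\""
| eval address=addr." ".city." ".state

Since the data is JSON, spath (or KV_MODE=json) may already extract the three fields, in which case only the final eval is needed.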
Thank you for the quick answer! The question here is whether the <new-value> can be a variable found within the event using a regexp that would extract the value.
transforms.conf:

[index_reset]
SOURCE_KEY = _raw
DEST_KEY = _MetaData:index
REGEX = .
FORMAT = index::<new-value>

This searches the _raw data feed for the regex match (change my example), then applies the FORMAT to the DEST_KEY. Test in a development environment first to fine-tune this process; it can be tricky to get the regex and format just right.
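To the follow-up question: yes, FORMAT can substitute a value captured by REGEX via $1. A minimal sketch, assuming the index name is embedded in the event as [index: sensitive] (adjust the pattern to the real marker; note that Splunk index names are lowercase, so the captured value must exactly match an existing index):

transforms.conf:

[index_reset]
SOURCE_KEY = _raw
DEST_KEY = _MetaData:index
REGEX = \[index:\s*(\w+)\]
FORMAT = index::$1

props.conf (assuming the TCP input is assigned a hypothetical sourcetype my_tcp_sourcetype):

[my_tcp_sourcetype]
TRANSFORMS-index_reset = index_reset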
Creating the regex would be easy enough, but it looks like your data is already coming in JSON or XML format. Is there a chance that the fields are already extracted as "city" and "state"? If not, then I would recommend revisiting the ingestion props as a best practice. Rather than creating a lot of regex at search time, if you had that field extraction during ingestion then any change to the data would auto-extract new fields.

.*\"city\"\:\"(?<city>[^\"]+)\"\,\"state\"\:\"(?<state>[^\"]+)
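A minimal sketch of what that could look like, assuming pure JSON events under a hypothetical sourcetype my_json_sourcetype. Index-time extraction in props.conf (set where the data is parsed, e.g. the forwarder):

[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

Or the lighter-weight search-time alternative (set on the search head):

[my_json_sourcetype]
KV_MODE = json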
Hi @CyberWolf, if the logs are like the one you shared, you can use a regex like the following:

| rex "^[^,]+,\"city\":\"(?<city>[^\"]+)\",\"state\":\"(?<state>[^\"]+)"

which you can test at https://regex101.com/r/ZafgnI/1 I could be more detailed if you could share a complete log, not only a part of it. Ciao. Giuseppe
Are you trying to capture the data in a single field or multiple fields?  Is this to be done at index time or search time?
Thanks. We have one on-prem license too. This on-prem env will be used for 2 days every quarter.
Hi there, I'm currently working on a search to get the usage of indexes, so I have an overview of which indexes get used in searches and which don't, so I can ask the use case owner whether the data is still needed and why it isn't being used. This is the current state of the search:

| rest "/services/data/indexes"
| table title totalEventCount frozenTimePeriodInSecs
| dedup title
| append [search index=_audit sourcetype="audittrail" search_id="*" action=search earliest=-24h latest=now
``` Regex Extraction ```
| rex field=search max_match=0 "index\=\s*\"?(?<used_index>\S+)\"?"
| rex field=search max_match=0 "\`(?<used_macro>\S+)\`"
| rex field=search max_match=0 "eventtype\=\s*(?<used_evttype>\S+)"
``` Eventtype resolving ```
| mvexpand used_evttype
| join type=left used_evttype [| rest "/services/saved/eventtypes" | table title search | stats values(search) as search by title | rename search as resolved_eventtype, title as used_evttype]
| rex field=resolved_eventtype max_match=0 "eventtype\=\s*(?<nested_eventtype>\S+)"
| mvexpand nested_eventtype
| join type=left nested_eventtype [| rest "/services/saved/eventtypes" | table title search | stats values(search) as search by title | rename search as resolved_nested_eventtype, title as nested_eventtype]
``` Macro resolving ```
| mvexpand used_macro
| join type=left used_macro [| rest "/servicesNS/-/-/admin/macros" count=0 | table title definition | stats values(definition) as definition by title | rename definition as resolved_macro, title as used_macro]
| rex field=resolved_macro max_match=0 "\`(?<nested_macro>[^\`]+)\`"
| mvexpand nested_macro
| join type=left nested_macro [| rest "/servicesNS/-/-/admin/macros" count=0 | table title definition | stats values(definition) as definition by title | rename definition as resolved_nested_macro, title as nested_macro]
| where like(resolved_nested_macro,"%index=%") OR isnull(resolved_nested_macro)
``` merge resolved stuff into one field ```
| foreach used* nested* [eval datasrc=mvdedup(if(<<FIELD>>!="",mvappend(datasrc, "<<FIELD>>"),datasrc))]
| eval datasrc=mvfilter(!match(datasrc, "usedData"))
| eval usedData = mvappend(used_index, if(!isnull(resolved_nested_eventtype),resolved_nested_eventtype, resolved_eventtype), if(!isnull(resolved_nested_macro),resolved_nested_macro, resolved_macro))
| eval usedData = mvdedup(usedData)
| table app user action info search_id usedData datasrc
| mvexpand usedData
| eval usedData=replace(usedData, "\)","")
| where !like(usedData, "`%`") AND !isnull(usedData)
| rex field=usedData "index\=\s*\"?(?<usedData>[^\s\"]+)\"?"
| eval usedData=replace(usedData, "\"","")
| eval usedData=replace(usedData,"'","")
| stats count by usedData ]

The search first gets the indexes via | rest with their event count and retention time. Then audittrail data gets appended, and the used indexes, macros, and eventtypes get extracted from the search string and resolved (since some apps use nested eventtypes/macros in my environment, they get resolved twice). It still needs some sanitizing of the extracted used indexes.
That gives me a table like this (limited to Splunk-internal indexes as an example):

title           totalEventCount  frozenTimePeriodInSecs  count  usedData
_audit          771404957        188697600
_configtracker  717              2592000
_dsappevent     240              5184000
_dsclient       232              5184000
_dsphonehome    843820           604800
_internal       7039169453       15552000
_introspection  39100728         1209600
_telemetry      55990            63072000
_thefishbucket  0                2419200
                                                         22309  _*
                                                         1039   _audit
                                                         2      _configtracker
                                                         1340   _dsappevent
                                                         1017   _dsclient
                                                         1      _dsclient]
                                                         709    _dsphonehome
                                                         2089   _internal
                                                         117    _introspection
                                                         2      _metrics
                                                         2      _metrics_rollup
                                                         2      _telemetry
                                                         2      _thefishbucket

But I didn't manage to merge the rows together so that I have count=1039 for _audit, plus the 22309 from searches that use all internal indexes, in one row for each index.
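A minimal sketch of one way to merge the two row sets, assuming the column names above: treat usedData as the index name, fold it onto title, and aggregate (the _* wildcard row would still need to be distributed over the matching indexes separately, e.g. with eventstats or a second pass).

``` appended after the closing ] of the search above ```
| eval title=coalesce(title, usedData)
| stats max(totalEventCount) as totalEventCount max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs sum(count) as count by title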
I'm trying to capture address, city, and state, which are on one line but contain ", :, and , characters that I would like to exclude (quotes, commas, and colons). See the test example below: 12345 noth test Avenue","city":"test","state":"test",
How can I filter my table data using a text box with multiple comma-separated keywords? This is my query:

index=atvi_test sourcetype=ncc
| rename hostname as Host component as Component filename as FileName
| eval source_list=split("*ORA*", ",")
| search Environment=QTEST Component IN (*)
| search NOT Message IN (null)
| table PST_Time Environment Host Component FileName Message
| sort PST_Time
| search [| makemv delim="," source_list
| eval search_condition=mvjoin(source_list, " OR Message=*")
| eval search_condition="Message=*" . search_condition
| return $search_condition]
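A minimal sketch of one alternative, assuming a dashboard text box token named $keywords$ that holds comma-separated terms to match against the Message field (the token name and field are assumptions): turn the commas into regex alternation and filter with match().

index=atvi_test sourcetype=ncc Environment=QTEST
| rename hostname as Host component as Component filename as FileName
``` "a,b,c" becomes the regex a|b|c; (?i) makes it case-insensitive ```
| where match(Message, "(?i)" . replace("$keywords$", ",\s*", "|"))
| table PST_Time Environment Host Component FileName Message
| sort PST_Time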
Hi, Perhaps this question has been asked before... Is it possible to store events coming from the same source in different indexes, depending on their content? The use case is that some events are more sensitive than others and need to be sent to different indexes. In our case, the index name would appear within the event, as a formatted field, like [index: SENSITIVE]. The input is a TCP port. Any help would be appreciated, and I would rather take no for an answer than be led into some intricate solution. Thank you, Jean
For the SPL you need to escape all backslashes and quotes. regex101 requires you to escape forward slashes by default (which is not part of the regex itself but of the default PHP PCRE delimiter syntax). SEDCMD uses the raw regex.
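As a hypothetical illustration (the city field is just an example, not taken from this thread), the same pattern in each context. In SPL, quotes inside the rex string are escaped:

| rex "\"city\":\"(?<city>[^\"]+)\""

On regex101 (PCRE flavour) the quotes need no escaping, only forward slashes would: "city":"(?<city>[^"]+)"

In props.conf, SEDCMD takes the raw regex in sed syntax, e.g. masking the value:

SEDCMD-mask_city = s/"city":"[^"]+"/"city":"x"/g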