All Posts
If you just want the application codes, why are you doing the mstats? | inputlookup app.csv | where Type="error1" | table application_codes
I had that feeling too, but it is still not working. Moreover, I tried the simplest query, one that does not include any special chars, and it is still throwing the same error. This also fails:

index=* | eval bytes=10+20 | table id.orig_h, id.resp_h, bytes
The GitHub App for Splunk supports both GitHub and Splunk Cloud; it should be possible to set up the first part of your log path using its documentation: https://splunkbase.splunk.com/app/5596
I would recommend making the following checks:
1. The props.conf file is on the indexer machines.
2. The props.conf file is readable by the splunk user.
3. The TZ value in the props.conf file reflects the timezone of the logs.
4. In your Splunk User Preferences in the web UI, your timezone is set to your current timezone.
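For illustration, a minimal sketch of what such a stanza could look like, assuming a hypothetical sourcetype name (my_app_logs) and logs written in UTC - substitute your own values:

# props.conf on the indexers (or on whichever tier does the parsing)
# [my_app_logs] is a placeholder sourcetype; TZ must match the timezone the logs are written in
[my_app_logs]
TZ = UTC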
Sorry, my mistake - the IP address in the errors in the log file belongs to another Splunk server that is turned off. I don't see any errors with the correct IP.
I don't see any events when filtering index=_internal and source=<path_to_splunkd.log> (with my path, obviously), but I do see errors when looking in the splunkd.log file on my SOAR machine - lots of "connection to host <indexer>:9997 failed", which is weird because 9997 is open on the Splunk indexer, the machines are in the same segment, and the "test connectivity" check worked.
OK, that is good. Do you see any logs coming from your SOAR host in the internal index at index=_internal?

If yes, then can you see any errors when you filter the source to splunkd.log? (For me it's source="/opt/phantom/splunkforwarder/var/log/splunk/splunkd.log".)

If no, then can you SSH into the SOAR machine and read that splunkd.log file looking for errors? Usually the file is located at /opt/phantom/splunkforwarder/var/log/splunk/splunkd.log (depending on how big the logfile is, you could use "cat splunkd.log | grep ERROR").
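If those internal logs are arriving, a search along these lines (using the default install path above) narrows things down to errors only; log_level is a field Splunk extracts from splunkd.log events:

index=_internal source="/opt/phantom/splunkforwarder/var/log/splunk/splunkd.log" log_level=ERROR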
Yes, I already set this up.
Did you set up your SOAR to forward logs?
1. Go to Administration -> Administration Settings -> Forwarder Settings -> New Group.
2. Add your indexers, e.g.: indexer1:9997
3. Check the boxes for which logs you would like to see.
4. Add an optional TCP token if it applies to your environment.
Once you save this configuration, your SOAR should start sending logs to Splunk Enterprise.

Ref:
https://docs.splunk.com/Documentation/SOARApp/1.0.57/Install/ConnectremotesearchSOAR6.2
https://docs.splunk.com/Documentation/SOARonprem/latest/Admin/Forwarders
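Once the group is saved, one quick way to verify that the SOAR forwarder is reaching the indexers is to look for its internal logs (replace <soar_host> with however your SOAR host identifies itself - that value is an assumption about your environment):

index=_internal host=<soar_host> | stats count by source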
Hi Giuseppe,
Thank you for your reply. I just wanted to understand this a bit more. All default collections.conf and transforms.conf files will be synced to the secondary SH from the primary SH. Once we bring up the services on the secondary SH in the future, should it populate all the data as in the primary, since we have the default KV store on the secondary as well? Is my understanding correct?
Regards,
VK
I'm trying to use the Splunk App for SOAR to forward logs and events from SOAR to Splunk Enterprise. The servers seem to be connected (test connectivity works), but the data (events, playbook runs, etc.) isn't being indexed and doesn't appear in search in Splunk. I tried reindexing the data through SOAR, but it didn't work. Adding an audit input in the app is working fine, but data isn't being indexed in real time into the expected indexes (I did create them using the "Create Indexes" button in the app). Did anyone experience anything similar, or have any idea as to what the issue might be?
Nope, no JSON - CEF events.
Hi @VK18, yes, to have a running copy of the primary Search Head, you also have to copy the $SPLUNK_HOME/var/lib/splunk/kvstore folder from the primary to the secondary. Ciao. Giuseppe
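A rough sketch of that copy, assuming default installation paths, SSH access between the sites, and that Splunk is stopped on the secondary while the files are copied (the hostname is a placeholder):

# on the secondary search head
/opt/splunk/bin/splunk stop
rsync -a primary-sh:/opt/splunk/var/lib/splunk/kvstore/ /opt/splunk/var/lib/splunk/kvstore/
/opt/splunk/bin/splunk start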
Why not just: REGEX = authentication\sfailure
Hi All,
I currently have a primary standalone Enterprise Security (ES) search head located in the main data center. Every day, a cronjob is executed to copy the entire /opt/splunk/etc/apps directory to the secondary standalone Enterprise Security search head, which is located in the DR site (a sketch of such a cron entry is shown below).
Now, the question arises: should I also copy the primary KVStore data, located in the var/lib directory, to the secondary ES search head? Currently, I'm only syncing the apps folder and not the var/lib directory. In the event of an issue with the primary search head in the future, I plan to bring up the secondary search head. Will there be any issues with the KVStore data if I'm not syncing the var/lib directory between the primary and secondary search heads?
Note: Since we're not using any custom-made KVStore lookups and only depend on the default ones generated by different Enterprise Security apps, it makes us wonder if syncing the var/lib directory between the primary and secondary search heads is essential.
Regards,
VK
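For reference, the daily copy described above might look something like this in cron - a sketch only, assuming rsync over SSH, with a placeholder DR hostname and schedule:

# crontab on the primary SH: mirror the apps directory to the DR search head at 01:00 daily
0 1 * * * rsync -a --delete /opt/splunk/etc/apps/ dr-sh:/opt/splunk/etc/apps/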
For this problem, using the lookup in a subsearch is more direct and potentially more efficient.

| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
| search type=error1 [ | inputlookup app.csv ]
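Note that the subsearch expands every field/value pair returned by inputlookup into OR'd search terms. If app.csv contains columns beyond application_codes, you may want to restrict it to just that field so the generated filter stays tight (a sketch, assuming the column name from the original post):

| mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
| search type=error1 [ | inputlookup app.csv | fields application_codes ]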
This is an interesting challenge because of the leading underscore (_) in the root node key. spath can't seem to recognize the path as is. (Could be a subtle bug.) One potential way to work around this is to use text replacement to get rid of the underscore. But I prefer a more syntactic method. In Splunk 9, you can run fromjson with an arbitrary prefix so spath can do its job. (You can also use fromjson repeatedly. But with deep paths like this problem, that's undesirable.)

| fromjson jsondata prefix=my
| spath input=my_embedded path=metadata.data.results{}
| mvexpand metadata.data.results{}
| spath input=metadata.data.results{}
| foreach notifications.* [eval ErrorCode = if(isnotnull('<<FIELD>>'), "<<MATCHSTR>>", ErrorCode), ErrorMessage = mvappend(ErrorMessage, '<<FIELD>>')]
| stats count by ErrorCode ErrorMessage

Your sample data results in

ErrorCode  ErrorMessage                  count
212        The image quality was poor    1
680        The image could not be found  1
809        Document not detected         1

This is an emulation you can play with and compare with real data:

| makeresults
| eval jsondata = "{ \"_embedded\": { \"metadata\": { \"environment\": { \"id\": \"6b3dc\" }, \"data\": { \"results\": [ { \"documentId\": \"f18a20f1\", \"notifications\": { \"212\": \"The image quality was poor\" } }, { \"documentId\": \"f0fdf5e8c\", \"notifications\": { \"680\": \"The image could not be found\" } }, { \"documentId\": \"95619532\", \"notifications\": { \"809\": \"Document not detected\" } } ] } } } }"

Note: Once you get past that leading underscore in the JSON path, you can then use the text manipulation method proposed by @ITWhisperer (with a small path correction), like this:

| fromjson jsondata prefix=my
| spath input=my_embedded path=metadata.data.results{}
| mvexpand metadata.data.results{} ``` not metadata.data.results{}.notifications ```
| rex field=metadata.data.results{} "\"(?<ErrorCode>\d+)\":\s*\"(?<ErrorMessage>[^\"]*)\""
| stats count by ErrorCode ErrorMessage

But text manipulation on structured data is not as robust, because a developer/software can always change the format in the future without altering semantics.
Thanks @marnall, this worked perfectly.
application_codes
0
1206
18
1729

I want to see only the above application codes, i.e. only those from the CSV file.
Scott, did you find a fix or workaround for this? I am having the exact same issue.