All Posts



OK, that is good. Do you see any logs coming from your SOAR host in the internal index at index=_internal?

If yes, can you see any errors when you filter the source to splunkd.log? (For me it's source="/opt/phantom/splunkforwarder/var/log/splunk/splunkd.log".)

If no, can you SSH into the SOAR machine and read that splunkd.log file looking for errors? The file is usually located at /opt/phantom/splunkforwarder/var/log/splunk/splunkd.log. (Depending on how big the log file is, you could use `grep ERROR splunkd.log`.)
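For a larger log file, the same grep-style check can be sketched in Python. This is just my own generic illustration, not a Splunk or SOAR tool; the path used below is simply the default SOAR forwarder location mentioned above:

```python
from pathlib import Path

def error_lines(log_path, keyword="ERROR"):
    """Return the lines of a splunkd.log-style file that contain the keyword."""
    return [line for line in Path(log_path).read_text().splitlines() if keyword in line]
```

For example, `error_lines("/opt/phantom/splunkforwarder/var/log/splunk/splunkd.log")` would give you only the ERROR lines to scan.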
Yes, I already set this up
Did you set up your SOAR to forward logs? Go to Administration->Administration Settings->Forwarder Settings->New Group, then add your indexers, e.g. indexer1:9997. Check the boxes for the logs you would like to see, and add an optional TCP token if it applies to your environment. Once you save this configuration, your SOAR should start sending logs to Splunk Enterprise.

Ref:
https://docs.splunk.com/Documentation/SOARApp/1.0.57/Install/ConnectremotesearchSOAR6.2
https://docs.splunk.com/Documentation/SOARonprem/latest/Admin/Forwarders
Hi Giuseppe, Thank you for your reply. I just wanted to understand this a bit more: all default collections.conf and transforms.conf files will be synced to the secondary SH from the primary SH. Once we bring up the services on the secondary SH in the future, should it populate all the data as in the primary, since we have the default KV Store collections on the secondary as well? Is my understanding correct? Regards VK
I'm trying to use the Splunk App for SOAR to forward logs and events from SOAR to Splunk Enterprise. The servers seem to be connected (test connectivity works), but the data (events, playbook runs, etc.) isn't being indexed and doesn't appear in search in Splunk. I tried reindexing the data through SOAR, but it didn't work. Adding an audit input in the app works fine, but data isn't being indexed in real time into the expected indexes (I did create them using the "Create Indexes" button in the app). Did anyone experience anything similar, or have any idea what the issue might be?
Nope, no JSON. CEF events.
Hi @VK18, yes, to have a running copy of the primary Search Head, you also have to copy the $SPLUNK_HOME/var/lib/splunk/kvstore folder from the primary to the secondary. Ciao. Giuseppe
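As a rough sketch of that extra copy step (my own illustration, not an official Splunk DR procedure; the function name and local-path arguments are assumptions, and Splunk should be stopped on the secondary while copying):

```python
import shutil
from pathlib import Path

def sync_kvstore(primary_home, secondary_home):
    """Replace the secondary's KV Store folder with a copy of the primary's.

    primary_home/secondary_home stand in for the two $SPLUNK_HOME directories;
    in a real DR setup the copy would travel over rsync/scp rather than a
    shared local path.
    """
    src = Path(primary_home) / "var" / "lib" / "splunk" / "kvstore"
    dst = Path(secondary_home) / "var" / "lib" / "splunk" / "kvstore"
    if dst.exists():
        shutil.rmtree(dst)  # drop the stale copy before replacing it
    shutil.copytree(src, dst)
    return dst
```

The delete-then-copy approach mirrors what a cronjob syncing /opt/splunk/etc/apps already does: the secondary's copy is treated as disposable and fully replaced each run.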
Why not just: REGEX = authentication\sfailure
Hi All, I currently have a primary standalone Enterprise Security (ES) search head located in the main data center. Every day, a cronjob copies the entire /opt/splunk/etc/apps directory to the secondary standalone Enterprise Security search head, which is located in the DR site. The question is: should I also copy the primary KV Store data, located in the var/lib directory, to the secondary ES search head? Currently, I'm only syncing the apps folder and not the var/lib directory. In the event of an issue with the primary search head in the future, I plan to bring up the secondary search head. Will there be any issues with the KV Store data if I'm not syncing the var/lib directory between the primary and secondary search heads? Note: since we're not using any custom-made KV Store lookups and only depend on the default ones generated by the various Enterprise Security apps, we wonder whether syncing the var/lib directory between the primary and secondary search heads is essential. Regards VK
For this problem, using the lookup in a subsearch is more direct and potentially more efficient:

|mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
| search [| inputlookup app.csv where Type="error1" | fields application_codes]

(The subsearch returns only the application_codes column, so the outer search keeps just the codes whose Type is error1 in the csv.)
This is an interesting challenge because of the leading underscore (_) in the root node key. spath can't seem to recognize the path as is. (Could be a subtle bug.) One potential way to work around this is to use text replacement to get rid of the underscore, but I prefer a more syntactic method. In Splunk 9, you can use fromjson with an arbitrary prefix so spath can do its job. (You can also use fromjson repeatedly, but with deep paths like in this problem, that's undesirable.)

| fromjson jsondata prefix=my
| spath input=my_embedded path=metadata.data.results{}
| mvexpand metadata.data.results{}
| spath input=metadata.data.results{}
| foreach notifications.*
    [eval ErrorCode = if(isnull(ErrorCode), "<<MATCHSTR>>", ErrorCode), ErrorMessage = mvappend(ErrorMessage, '<<FIELD>>')]
| stats count by ErrorCode ErrorMessage

Your sample data results in

ErrorCode  ErrorMessage                  count
212        The image quality was poor    1
680        The image could not be found  1
809        Document not detected         1

This is an emulation you can play with and compare with real data:

| makeresults
| eval jsondata = "{ \"_embedded\": { \"metadata\": { \"environment\": { \"id\": \"6b3dc\" }, \"data\": { \"results\": [ { \"documentId\": \"f18a20f1\", \"notifications\": { \"212\": \"The image quality was poor\" } }, { \"documentId\": \"f0fdf5e8c\", \"notifications\": { \"680\": \"The image could not be found\" } }, { \"documentId\": \"95619532\", \"notifications\": { \"809\": \"Document not detected\" } } ] } } } }"

Note: Once you get past that leading underscore in the JSON path, you can use the text manipulation method proposed by @ITWhisperer (with a small path correction), like this:

| fromjson jsondata prefix=my
| spath input=my_embedded path=metadata.data.results{}
| mvexpand metadata.data.results{} ``` not metadata.data.results{}.notifications ```
| rex field=metadata.data.results{} "\"(?<ErrorCode>\d+)\":\s*\"(?<ErrorMessage>[^\"]*)\""
| stats count by ErrorCode ErrorMessage

But text manipulation on structured data is not as robust, because a developer/software can always change the format in the future without altering semantics.
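Outside Splunk, the same extraction can be sketched in plain Python against the sample payload above. This is only a way to sanity-check what the SPL should produce, not part of the Splunk pipeline; the function name is my own:

```python
import json
from collections import Counter

# Same sample payload as the makeresults emulation above.
SAMPLE = json.loads("""
{ "_embedded": { "metadata": {
    "environment": { "id": "6b3dc" },
    "data": { "results": [
      { "documentId": "f18a20f1",  "notifications": { "212": "The image quality was poor" } },
      { "documentId": "f0fdf5e8c", "notifications": { "680": "The image could not be found" } },
      { "documentId": "95619532",  "notifications": { "809": "Document not detected" } }
    ] } } } }
""")

def notification_counts(payload):
    """Count (ErrorCode, ErrorMessage) pairs across all results,
    mirroring `stats count by ErrorCode ErrorMessage`."""
    results = payload["_embedded"]["metadata"]["data"]["results"]
    return Counter(
        (code, msg)
        for r in results
        for code, msg in r.get("notifications", {}).items()
    )
```

Note that plain dict indexing has no trouble with the `_embedded` key; the underscore issue is specific to how spath interprets the path.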
Thanks @marnall this worked perfectly.
application_codes
0
1206
18
1729

I want to see only the above application codes, i.e. from the csv file only.
Scott, did you find a fix or workaround for this? I am having the exact same issue.
Please can you give an example of your expected results?
|mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes
|lookup app.csv application_codes

When I run the above query, I get application_codes from the mstats query, not from the csv file.
Try a lookup of application_codes in the csv and then filter by type.
|mstats sum(faliure.count) as Failed where index=metric-logs by service application_codes

From the above query I get the results of service and application_codes. But my requirement is to get the application_codes from a csv file, and only those with type=error1. Below is the csv file:

application_codes  Description    Type
0                  error descp 1  error1
10                 error descp 2  error2
10870              error descp 3  error3
1206               error descp 1  error1
11                 error descp 3  error3
17                 error descp 2  error2
18                 error descp 1  error1
14                 error descp 2  error2
1729               error descp 1  error1
Hi all, I have installed and configured the FortiWeb for Splunk app. The problem is that the time in the log is correct, but the time I see in the Splunk time column is 7 hours different. It should be mentioned that there is a field in the logs called timezone_dayst that differs from my time zone by exactly 7 hours. I also added TZ = MyTimeZone to the props.conf of the app, but the problem still exists. For example, in the image below, the time shown is 8:37 while the log time is 1:07, and of course timezone_dayst has a drift (-3:30 instead of +3:30). Any ideas are appreciated.
I recommend using the "where" command:

index=indexname sourcetype=eventname
| where result1 > 5

(Note: this assumes that result1 is already an extracted field.) If not, try this:

index=indexname sourcetype=eventname
| rex field=_raw "result1=(?<result1>\d*)"
| where tonumber(result1) > 5

(rex extracts result1 as a string, so wrapping it in tonumber() makes the numeric comparison reliable.)
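The rex-then-filter idea maps onto an ordinary regex extraction. Here is a small Python sketch of the same logic, just to illustrate the pattern; the field name result1 comes from the post, while the function name and threshold default are my own:

```python
import re

# Same capture idea as: rex field=_raw "result1=(?<result1>\d*)"
RESULT1 = re.compile(r"result1=(\d+)")

def events_above(raw_events, threshold=5):
    """Keep raw events whose extracted result1 value exceeds the threshold."""
    kept = []
    for event in raw_events:
        m = RESULT1.search(event)
        if m and int(m.group(1)) > threshold:  # int() plays the role of tonumber()
            kept.append(event)
    return kept
```

Events with no result1 at all simply fail the match and are dropped, just as `where` discards events where the field is null.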