All Posts

You can try this to get the report in that format. Edit: Noticed that the chart method could mess up the order of dates from left to right, so I think sorting first and then doing a transpose should fix it.

source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log"
| timechart span=1d limit=30 count as count by DFOINTERFACE
| sort 0 +_time
| eval timestamp=strftime(_time, "%m/%d/%Y")
| fields + timestamp, *
| fields - _*
| transpose 30 header_field=timestamp
| rename column as "DFOINTERFACE \ Date"

Example from my local instance.
Can you clarify which technical add-on you're using? Also, couldn't you ask your admin to clarify the question you originally had? If you're using this add-on here, then you can write a search using the ldapsearch command and write the results to an index with the collect command. Otherwise, generating the CSV file and then setting up a file monitor to ingest it is the long way to do it.
Hi @dtburrows3, it's giving a different result. I just want it in the reverse direction: it's giving me the first layout, but I want the second.
Hi All, I am using the sendemail command to send a CSV file to different recipients based on the search:

| eval subject="This is test subject", email_body="This is test email body"
| map search="| inputcsv test.csv | where owner=\"$email$\" | sendemail sendcsv=true to=\"$email$\" subject=\"$subject$\" message=\"$email_body$\""

I want the email body to be "This is test email body". Instead I am getting "Search Results attached". I understand the message depends on the arguments passed; because I am passing sendcsv=true, I am getting this. I am using sendcsv because I am sending the results as a CSV attachment. Please let me know how I can pass a custom message to the email body. Regards, PNV
Did not know about the valid key entries. Thanks for sharing! I came across this documentation after reading your comment: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/MonitorWindowseventlogdata Oof, and this is right in the inputs.conf docs.
"ProcessName" is not a valid key for a blacklist setting.  Valid keys are "Category, CategoryString, ComputerName, EventCode, EventType, Keywords, LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName, TaskCategory, Type, and User". Also, the RHS must be a valid regular expression.  A valid regex cannot begin with "*".  If you're trying to specify a wildcard at the beginning and end of the match then there's no need - that's implied with most regexes.
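To illustrate the "a valid regex cannot begin with *" point, here is a quick sanity check in Python (the process path is a made-up example value, not taken from the poster's environment):

```python
import re

# Hypothetical Process Name value, for illustration only
process_name = r"C:\win32\DesktopExtension.exe"

# A pattern beginning with "*" is not valid regex syntax ("nothing to repeat")
try:
    re.compile(r"*\\DesktopExtension.exe*")
    starts_with_star_ok = True
except re.error:
    starts_with_star_ok = False

# The corrected pattern compiles and matches the process path
pattern = re.compile(r".*\\DesktopExtension\.exe.*")
matches = bool(pattern.search(process_name))

print(starts_with_star_ok, matches)  # the "*..." form fails, the ".*..." form matches
```

Note that Splunk's blacklist matching uses its own regex engine, so this is only an analogy for why the `*`-prefixed pattern is rejected.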
Give this a try:

blacklist3 = EventCode="4673" Process_Name=".*\\DesktopExtension\.exe.*"

From what I'm reading in the Splunk docs, it needs to be a valid regex to work, and this regex seems to match properly. The original regex you posted doesn't seem to be valid according to regex101. I also noticed that the key you posted, "ProcessName", is different than the field I see extracted from Windows data on my local machine, which is "Process_Name". But maybe that is how it is coming over in your environment; if that is the case, then maybe this could work:

blacklist3 = EventCode="4673" ProcessName=".*\\DesktopExtension\.exe.*"
Hi folks, happy new year to you all :-) In my org the Splunk deployment is as follows: heavy forwarders (HF1, HF2) collect data from directories and HTTP, and send it to Splunk Cloud (2 search heads). Case: we have the Active Directory add-on on HF1, which establishes a connection to AD, writes a CSV file under var/* on the host, and that file is indexed to the cloud. The admin said we have an input which writes data to index=asset_identity. I am not sure what the admin was referring to. Is it a conf file on the HF?
Hello all, I am trying to blacklist this app that is generating a ton of Windows event logs, until I find what app it is and uninstall it. This is for HP's DesktopExtension.exe. The weird thing is that it is only running on about 30 devices. Here is the current section in inputs.conf:

[WinEventLog://Security]
disabled = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist3 = EventCode=4673 ProcessName="*\\DesktopExtension.exe*"
renderXml = false
index = oswinsec

However, even after restarting the Splunk forwarder the events still appear. I verified that one of the hosts has the correct inputs.conf. I have also tried:

blacklist3 = EventCode=4673 ProcessName="C:\Program Files\WindowsApps\AD2F1837.myHP_28.52349.1300.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe"

Here is an example of the log/event:

LogName=Security
EventCode=4673
EventType=0
ComputerName=*********
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=10115718
Keywords=Audit Failure
TaskCategory=Sensitive Privilege Use
OpCode=Info
Message=A privileged service was called.

Subject:
  Security ID: *****************
  Account Name: ****************
  Account Domain: ***********
  Logon ID: ****************

Service:
  Server: Security
  Service Name: -

Process:
  Process ID: 0x6604
  Process Name: C:\Program Files\WindowsApps\AD2F1837.myHP_28.52349.1300.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe

Service Request Information:

Any tips?
Assuming that your events have proper timestamps extracted to the _time field, you should be able to do this:

source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log"
| timechart limit=30 span=1d count as count by DFOINTERFACE
I am getting the count of each interface, but I need it broken out by date, as in the example below. Please help me modify my query.
What do you mean by "calls"? If you mean API calls, there is no limit I know of.

Data retrieval is not limited by time period. Query results are limited in the amount of disk space they can use, with each role having its own configurable limit (100MB is the default). Once the limit is reached, old jobs must be deleted to free up disk space.

Data ingestion is limited only by the power of the indexer(s). The I/O rate of the storage system is a key factor, however. HEC inputs tend to be faster, but have a limit of 1MB per transmission.

Data loss is possible in a number of ways. For example, if an indexer goes down and the sender does not retry the transmission, then data could be lost. We'll need to know more specifics about your environment to discuss other ways data could be lost.
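One way to work with a per-transmission limit like the ~1MB HEC cap mentioned above is to batch events on the sender side. A minimal sketch in Python (the exact limit is an assumption here; on the Splunk side it is configurable, and sending the payloads themselves is left out):

```python
import json

MAX_BYTES = 1_000_000  # assumed per-request HEC payload cap

def batch_hec_events(events, max_bytes=MAX_BYTES):
    """Group events into newline-delimited HEC payloads that stay under max_bytes."""
    batches, current, size = [], [], 0
    for event in events:
        line = json.dumps({"event": event})
        line_size = len(line.encode("utf-8")) + 1  # +1 for the newline separator
        if current and size + line_size > max_bytes:
            batches.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += line_size
    if current:
        batches.append("\n".join(current))
    return batches

# 10 large events get split across several payloads, each under the cap
payloads = batch_hec_events([{"msg": "x" * 300_000} for _ in range(10)])
print(len(payloads))
```

Each payload can then be POSTed to the HEC endpoint as one request; events too large to ever fit a single payload would need separate handling.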
Try this instead:

index="aws_cloud" eventName IN ("value1", "value2", "value3")

I believe the format you posted searches for eventName="value1" OR any raw log containing the string "value2" OR "value3", even if "value2" or "value3" isn't the actual value of eventName for that particular event.
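The difference in semantics can be illustrated outside SPL. A rough Python analogy (the events and field values are made up for illustration):

```python
events = [
    {"eventName": "value1", "_raw": "eventName=value1 ..."},
    # mentions "value2" in the raw text only; eventName is LookupEvents
    {"eventName": "LookupEvents", "_raw": "... requested value2 ..."},
]

# eventName IN ("value1", "value2", "value3"): compares the field value itself
in_matches = [e for e in events if e["eventName"] in {"value1", "value2", "value3"}]

# eventName="value1" OR "value2" OR "value3": bare terms match anywhere in the raw event
or_matches = [
    e for e in events
    if e["eventName"] == "value1" or "value2" in e["_raw"] or "value3" in e["_raw"]
]

print(len(in_matches), len(or_matches))  # the OR form also picks up the LookupEvents event
```

This mirrors why the original search returned LookupEvents events: the bare "value2"/"value3" terms matched the raw log text rather than the eventName field.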
One more on the search query, from Splunk AWS:

index="aws_cloud" | search eventName="value1" OR "value2" OR "value3"

The above search query is returning events for all of the values I searched, but it is also returning one more value which I didn't search for: eventName: LookupEvents. I am getting this field and value even though I didn't search for it.
Hi, thank you for your answer. I'm trying to prevent SC4S from sending syslog-ng logs, metrics, and any other traffic besides the actual logs via HEC, because we have a low-resources environment.

In /opt/sc4s/local/config/destinations/block_me.conf:

destination d_block_metrics {
    file("/dev/null");
};

And in /opt/sc4s/local/config/log_paths/block_me.conf:

log {
    source(s_internal);
    source(s_system);
    #destination(d_hec_debug);
    destination(d_block_metrics);
    flags(final);
};

I guess I'm doing something wrong, because even with flags(final); all metrics and errors are still being sent to Splunk. I just need to restrict resources, because used memory grows uncontrollably until it reaches the 256MB allocated to the container. Thanks a lot, Daniel
As far as I know, any index that receives the results of a scheduled report is considered a summary index (i.e. via the collect command in a search, or via the "action.summary_index" parameter in savedsearches.conf). To look for saved searches using either of these methods, you can search the REST endpoint like this:

| rest splunk_server=local /servicesNS/-/-/saved/searches
| fields + title, qualifiedSearch, "action.summary_index", "action.summary_index.*"
| where match(qualifiedSearch, "(?i)\|(?:\s|\n)*collect") OR ('action.summary_index'=="1" OR match('action.summary_index', "(?i)true"))
| rename title as savedsearch_name
| rex field=qualifiedSearch max_match=0 "(?<collect_spl>\|\s*collect\s+[^\n]+)"
| fields + savedsearch_name, collect_spl, "action.summary_index", "action.summary_index.*"

From here you could set up a regex to extract the index/sourcetype from the "collect_spl" field, or use the "action.summary_index.*" values to gather that info. It's possible for the "collect_spl" field to contain only the index, and even then that index specification could be stored in a macro, so those situations may be a bit more tricky. It is also possible for a parameter called "output_format=hec" to be used with the collect command; if this is the case, then sourcetype and source will not be specified with the collect command and are instead defined in the SPL itself. You can see examples of these scenarios here.

To use this method to produce a report listing the indexes/sourcetypes being used as summary indexes, you can use SPL like this. (Note: there is a custom Splunk command used in this code that expands macros all the way down before we attempt any extraction of collect metadata. You can DM me if you would like me to share the script.)

| rest splunk_server=local /servicesNS/-/-/saved/searches
| fields + title, qualifiedSearch, "action.summary_index", "action.summary_index.*"
| where match(qualifiedSearch, "(?i)\|(?:\s|\n)*collect") OR ('action.summary_index'=="1" OR match('action.summary_index', "(?i)true"))
| rename title as savedsearch_name
``` this is a splunk custom command I created, reach out to me through DM and I can share the code ```
| expandmacros input_field=qualifiedSearch output_field=expanded_spl
| rex field=expanded_spl max_match=0 "(?<collect_spl>\|\s*collect\s+[^\n]+)"
| where isnotnull(collect_spl) OR ('action.summary_index'=="1" OR match('action.summary_index', "(?i)true"))
| fields + savedsearch_name, collect_spl, expanded_spl, "action.summary_index", "action.summary_index.*"
| rex field=expanded_spl max_match=0 "(?i)\|\s*(?<eval_spl>eval\s+[^\|]+)"
| eval eval_spl=mvfilter(match(eval_spl, "\s+source(?:type)?\"?\s*\=\s*\""))
| rex field=eval_spl max_match=0 "\s+sourcetype\"?\s*\=\s*\"(?<inline_set_sourcetype>[^\"]+)"
| rex field=eval_spl max_match=0 "\s+source\"?\s*\=\s*\"(?<inline_set_source>[^\"]+)"
| rex field=collect_spl max_match=0 "index\s*\=\s*\"?(?<summary_index>[a-zA-Z0-9\-\_]+)"
| rex field=collect_spl max_match=0 "sourcetype\s*\=\s*\"?(?<summary_sourcetype>[a-zA-Z0-9\-\_]+)"
| rex field=collect_spl max_match=0 "source\s*\=\s*\"?(?<summary_source>[a-zA-Z0-9\-\_]+)"
| fields + savedsearch_name, collect_spl, summary_index, summary_sourcetype, summary_source, inline_set_sourcetype, inline_set_source, "action.summary_index", "action.summary_index.*"
| eval summary_index=mvdedup(mvappend('summary_index', 'action.summary_index._name')),
       summary_sourcetype=mvdedup(mvappend(summary_sourcetype, inline_set_sourcetype)),
       summary_source=mvdedup(mvappend(summary_source, inline_set_source))
| fillnull value="stash" summary_sourcetype
| fields - inline_*
| stats dc(savedsearch_name) as dc_savedsearches by summary_index, summary_sourcetype
| sort 0 -dc_savedsearches

The final output would look something like this. (Screenshot has been redacted.)
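The rex extractions in that SPL can be sanity-checked outside Splunk. A small Python sketch using the same character classes against a made-up collect clause (the index and sourcetype names here are hypothetical):

```python
import re

# Hypothetical collect clause as it might appear in expanded SPL
collect_spl = '| collect index=my_summary sourcetype="my_summary_st"'

# Same shape as the rex patterns in the search above: optional quote, then [a-zA-Z0-9\-\_]+
index = re.search(r'index\s*=\s*"?([a-zA-Z0-9\-\_]+)', collect_spl)
sourcetype = re.search(r'sourcetype\s*=\s*"?([a-zA-Z0-9\-\_]+)', collect_spl)

print(index.group(1), sourcetype.group(1))  # my_summary my_summary_st
```

One caveat this makes visible: the character class stops at characters like ":" or ".", so sourcetypes containing those would be truncated and the pattern would need widening.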
I am new to Splunk, and need help configuring the log files collected from my honeypot to the monitoring VM. They are on the same network and can ping each other. The source is acknowledged in the Splunk dashboards, but I'm not sure on which VM I am supposed to edit the inputs and outputs configuration files, or what other edits are needed.
Hi @tlmayes, before restarting, open a case with Splunk Support, sending them a diag. Ciao. Giuseppe
Hi @cybermonday, were you able to fix the problem?
I want to get the list of summary indexes configured in Splunk. Please help me with queries to get each summary index and its sourcetype.