All Posts

I would like to build a Splunk attack range and perform a series of attacks on my Splunk server using AWS. Do I need to create an image of my server to do that? Is that even possible? How can I test my existing infrastructure with this tool, instead of using the Splunk server that the tool creates automatically? I have already read these docs: https://attack-range.readthedocs.io/en/latest/Attack_Range_AWS.html https://github.com/splunk/attack_range https://www.splunk.com/en_us/blog/security/attack-range-v3-0.html
Nope not working
@gcusello , Error in 'SearchOperator:regex': The regex '(?:ParentProcessName).+(?:C:\Program Files\Windows Defender Advanced Threat Protection\)' is invalid. Regex: unknown property after \P or \p.    
Hi @Lavender, in this case you have to add an additional condition:

index=xyz component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| search RequestID=*
| eval env="main_search"
| table _time Country Environment appID LogMessage env
| append [ search index=xyz appid=12345 message="*|osv|*" level="error" `mymacrocompo`
    | rex "trace-id.(?<RequestID>\d+)"
    | search RequestID=*
    | eval env="sub_search"
    | table RequestID LogMessage1 env ]
| stats earliest(_time) AS _time values(Country) AS Country values(Environment) AS Environment values(appID) AS appID values(LogMessage) AS LogMessage values(eval(if(level="error",LogMessage1, "NULL"))) AS Errorlogs dc(env) AS env_count BY RequestID
| where env_count=2

Ciao.
Giuseppe
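The final correlation step of that search (dc(env) BY RequestID followed by where env_count=2) keeps only the RequestIDs that appear in both the main search and the sub search. As a rough Python analogue — the rows below are made-up stand-ins, not real event data:

```python
# Hypothetical rows standing in for the main-search and sub-search results.
rows = [
    {"RequestID": "100", "env": "main_search"},
    {"RequestID": "100", "env": "sub_search"},
    {"RequestID": "200", "env": "main_search"},  # no matching sub-search row
]

# Equivalent of: | stats dc(env) AS env_count BY RequestID
envs_by_id = {}
for row in rows:
    envs_by_id.setdefault(row["RequestID"], set()).add(row["env"])

# Equivalent of: | where env_count=2
matched = sorted(rid for rid, envs in envs_by_id.items() if len(envs) == 2)
```

Only RequestID "100" survives, because it is the only one seen with both env values.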
Hi @AL3Z, run a search on the index where the logs you filtered are stored and, if your filter is applied to one or more hosts, optionally add a filter on those hosts. In the search, use the same regex with the regex command (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Regex). Something like this:

index=windows host=<your_host>
| regex "(?:ParentProcessName).+(?:C:\\Program Files\\Windows Defender Advanced Threat Protection\\)"

Check the results and see whether or not they arrive from the hosts you're expecting.
Ciao.
Giuseppe
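The escaping pitfall behind the original "unknown property after \P or \p" error can be reproduced outside Splunk. As an illustration, Python's re module rejects a single-backslash \P the same way, while doubled backslashes compile and match (the executable name in the sample path is hypothetical):

```python
import re

# A single backslash before "Program" is parsed as the escape \P,
# which the regex engine rejects -- the same class of error Splunk
# reports as "unknown property after \P or \p".
try:
    re.compile(r"C:\Program Files")
    single_backslash_ok = True
except re.error:
    single_backslash_ok = False

# Doubling the backslashes makes them literal, so the pattern compiles
# and matches a real Windows path.
pattern = re.compile(
    r"C:\\Program Files\\Windows Defender Advanced Threat Protection\\"
)
match = pattern.search(
    "C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense.exe"
)
```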
Oh ok, both are working great, as expected. Thank you for your assistance with this.
Hi, I have blacklisted "(?:ParentProcessName).+(?:C:\\Program Files\\Windows Defender Advanced Threat Protection\\)" on the deployment server and applied it to one of the Windows servers. How can we troubleshoot whether it has been applied or not?
@mad_splunker

index=someindex cluster=api
    [ search index=someindex cluster=gw uuid=gw98037234c6e51a48816016172b8a3c56
      | eval uuid="gw"+reqid
      | table uuid ]

Can you please try this? I have used a different approach.

thanks
KV
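The subsearch-as-filter idea here is a two-pass pattern: first compute the derived uuid values, then filter the outer events against them. A minimal Python analogue, using hypothetical event dicts rather than any real Splunk API:

```python
# Hypothetical events for the gw and api clusters.
gw_events = [{"cluster": "gw", "reqid": "98037234c6e51a48816016172b8a3c56"}]
api_events = [
    {"cluster": "api", "uuid": "gw98037234c6e51a48816016172b8a3c56"},
    {"cluster": "api", "uuid": "gw_other"},
]

# Inner subsearch: | eval uuid="gw"+reqid | table uuid
wanted_uuids = {"gw" + e["reqid"] for e in gw_events}

# Outer search: keep only api events whose uuid appears in the
# subsearch output.
matched = [e for e in api_events if e["uuid"] in wanted_uuids]
```

In SPL, the tabled field from the subsearch is expanded into an OR'd filter condition on the outer search, which is why no join or append is needed.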
Hello Splunkers, I am trying the query below:

index=someindex cluster=gw uuid=gw98037234c6e51a48816016172b8a3c56
| eval api_uuid="gw"+reqid
| head 1
| append [ search index=someindex cluster=api uuid=api_uuid ]

Basically, what I am trying to do is get the result from the first search, evaluate a new field from it, and add that field as a condition to the second search. It does not work if I supply the api_uuid field, but if I replace uuid in the append with the actual computed value, it returns the proper result. I have seen a few people use join, but I don't want to use join as it is expensive and comes with limits. Is there any solution to the above query?
In the results, we are getting the Error Log message if available, but our requirement is to get the log message only if the Request ID matches the RequestID of the sub query. Please help.
If you want both fields, you can either use rex to extract both, or use split to break the string on the ":" character and then assign the first part to ip and the second to host.

| rex field=Hostname "(?<ip>[^:]*):(?<host>.*)"

OR

| eval tmp=split(Hostname, ":")
| eval ip=mvindex(tmp, 0), host=mvindex(tmp, 1)
| fields - tmp

rex is neater, and you can make this an automatically extracted field, so you don't have to do it as part of the search.
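One caveat: the sample values in the related question have spaces around the ":", so the raw captures can carry stray whitespace. A quick Python sketch of the same named-group extraction, using a sample value from that question:

```python
import re

# Equivalent of:  | rex field=Hostname "(?<ip>[^:]*):(?<host>.*)"
hostname = "10.10.10.10 : Host A"
m = re.match(r"(?P<ip>[^:]*):(?P<host>.*)", hostname)

# The raw captures keep the spaces around ":", so trim them here;
# in SPL you could do the same with trim() in an eval.
ip = m.group("ip").strip()
host = m.group("host").strip()
```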
Hi @bowesmana, thank you for your response. Your regex works great. If I want the IP in another field, do I need to use another regex?
Hi @gcusello ,   Thanks for your answers. 
Use rex:

| rex field=Hostname ".*:(?<host>.*)"

This will give you a new field called host containing everything after the ":" to the end.
When using a lookup, it's normal to just use it as a lookup, rather than as a data source via inputlookup which you then have to join with your other data set, as you are doing with your appendcols. If this is your base search for the data:

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%A")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")

you just need to add the following to look up the holiday list:

| eval Event_Date=strftime(_time, "%m/%d/%Y")
| lookup HolidayList.csv Holidays_Date as Event_Date OUTPUT Alert
| where isnull(Alert) OR Alert!="App Relative Logs Data"

I would also suggest you change your initial search to move the static criteria from the where clauses into the search itself, and do the strftime just before it's needed, i.e.

index=splunk-index message="start" NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| eval day=strftime(_time, "%A")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")
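The lookup-based suppression amounts to dropping events whose calendar date appears in the holiday list. A small Python sketch under that assumption — the dates and events are made up, and HolidayList.csv is modeled as a plain set of mm/dd/YYYY strings:

```python
from datetime import datetime

# HolidayList.csv modeled as a set of mm/dd/YYYY strings.
holidays = {"12/25/2023", "01/01/2024"}

events = [
    {"_time": datetime(2023, 12, 25, 9, 30), "message": "start"},
    {"_time": datetime(2023, 12, 26, 9, 30), "message": "start"},
]

# Equivalent of: | eval Event_Date=strftime(_time, "%m/%d/%Y")
#                | lookup ... OUTPUT Alert | where isnull(Alert)
kept = [e for e in events
        if e["_time"].strftime("%m/%d/%Y") not in holidays]
```

Only the Dec 26 event survives, since Dec 25 matches an entry in the holiday list.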
Hi, I want to list out all the hostnames in my Tripwire log, but my Hostname field values are as below:

Hostname
10.10.10.10 : Host A
192.0.0.0 : Host B

My hostname and IP are mixed in the same field. How do I split the hostname and IP, and list out only the hostnames? Please assist me on this. Thank you
I'm using Splunk to collect the state of Microsoft IIS web server app pools. I've noticed that when the Universal Forwarder collects Perfmon data whose instance names contain spaces, and the data is ingested into a metrics index, the part of the instance name after the first space is lost. This doesn't happen if I ingest into a normal index. Here is my configuration in the inputs.conf file:

[perfmon://IISAppPoolState]
interval = 10
object = APP_POOL_WAS
counters = Current Application Pool State
instances = *
disabled = 0
index = metrics_index
mode = single
sourcetype = perfmon:IISAppPoolState

It is on a machine which has IIS pools with spaces in their names, i.e. "company website", "company portal", "HR web". When this data is ingested into the metrics index and accessed via the following Splunk command:

| mstats latest(_value) as IISAppPoolState WHERE index=metrics_index metric_name="IISAppPoolState.Current Application Pool State" by instance, host

I end up with instance values that truncate at the first space. So "company website" becomes just "company" (and who knows what happens to "company portal"). However, if I direct the data into a normal index, the instance names are wrapped in quotes and the space in the instance name is preserved. Is there any way to fix this behaviour? Collecting this data into a metrics index has worked fine until now, but thanks to this server having IIS site names with spaces in them, it's causing a real problem.

Thanks for your thoughts! Eddie
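As a hedged illustration of why the instance name might truncate (this is not Splunk's actual ingestion code, just the general mechanism): if a dimension value is written unquoted into a space-delimited line, a naive whitespace split loses everything after the first space, while quoting preserves it — which matches the quoted behaviour seen in the normal event index:

```python
import shlex

# Unquoted dimension value in a space-delimited line: naive splitting
# truncates the instance name at the first space.
line = "instance=company website state=1"
naive = dict(tok.split("=", 1) for tok in line.split() if "=" in tok)

# Quoted value: a shell-style tokenizer keeps the full instance name.
quoted_line = 'instance="company website" state=1'
quoted = dict(tok.split("=", 1) for tok in shlex.split(quoted_line))
```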
@BoldKnowsNothin - Please check to see if you have any errors/warnings from that host, as suggested by @SanjayReddy. Also, check whether the Splunk service on Windows runs as a local user or the System user, and whether the user running the Splunk service has permission to read logs from that folder.   I hope this helps!!!
@Jana42855 - Your work is already done for you. Use the Content Update App from Splunkbase - https://splunkbase.splunk.com/app/3449    You can read about the use cases inside the App here - https://research.splunk.com/detections/    I hope this helps!!! Kindly upvote if it does!!!
@Vani_26 - Try this:

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%m/%d/%y")
| search NOT [| inputlookup HolidayList.csv
    | where like(Alert, "App Relative Logs Data")
    | rename Holidays_date as day
    | fields day ]

Just to make sure: this will not suppress the alert on the holiday, but rather suppress the alert for data that is timestamped on the holiday. There is a minor difference.

I hope this helps!!! Kindly upvote if it does!!!