All Posts

When using a lookup, it's normal to use it via the lookup command rather than as a data source via inputlookup, which you then have to join with your other data set as you are doing with your appendcols. If this is your base search for the data

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%A")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")

you just need to add the following to look up the holiday dates and discard events that fall on them:

| eval Event_Date=strftime(_time, "%m/%d/%Y")
| lookup HolidayList.csv Holidays_Date as Event_Date OUTPUT Alert
| where isnull(Alert) OR Alert!="App Relative Logs Data"

I would also suggest you change your initial search to move the static search criteria from the where clauses into the base search, and do the strftime just before it's needed, i.e.

index=splunk-index message="start" NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| eval day=strftime(_time, "%A")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")

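Putting both suggestions together, a minimal end-to-end sketch (untested; note that strftime with "%m/%d/%Y" produces zero-padded dates such as 08/12/2023, so the Holidays_Date values in the lookup file would need to use the same format):

index=splunk-index message="start" NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| eval day=strftime(_time, "%A")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")
| eval Event_Date=strftime(_time, "%m/%d/%Y")
| lookup HolidayList.csv Holidays_Date as Event_Date OUTPUT Alert
| where isnull(Alert) OR Alert!="App Relative Logs Data"
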
Hi, I want to list out all the hostnames in my Tripwire log, but my hostname field looks like this:

Hostname
10.10.10.10 : Host A
192.0.0.0 : Host B

The hostname and IP are mixed together in the same field. How do I split the hostname and IP apart and list out only the hostnames? Please assist me on this. Thank you.

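A minimal sketch of one way to split such a field, assuming every value follows the "IP : Name" pattern shown above (makeresults just fakes one event for illustration; replace the first two lines with your real base search, and adjust the regex if your actual data varies):

| makeresults
| eval Hostname="10.10.10.10 : Host A"
| rex field=Hostname "^(?<ip>\d{1,3}(?:\.\d{1,3}){3})\s*:\s*(?<host_name>.+)$"
| table host_name
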
I'm using Splunk to collect the state of Microsoft IIS web server app pools. I've noticed that when the Universal Forwarder collects Perfmon data whose instance names contain spaces, and that data is ingested into a metrics index, everything in the instance name after the first space is lost. This doesn't happen if I ingest into a normal index. Here is my configuration in the inputs.conf file:

[perfmon://IISAppPoolState]
interval = 10
object = APP_POOL_WAS
counters = Current Application Pool State
instances = *
disabled = 0
index = metrics_index
mode = single
sourcetype = perfmon:IISAppPoolState

It is on a machine which has IIS pools with spaces in their names, i.e. "company website", "company portal", "HR web". When this data is ingested into the metrics index and accessed via the following Splunk command:

| mstats latest(_value) as IISAppPoolState WHERE index=metrics_index metric_name="IISAppPoolState.Current Application Pool State" by instance, host

I end up with instance values that truncate at the first space, so "company website" becomes just "company" (and who knows what happens to "company portal"). However, if I direct the data into a normal index, the instance names are wrapped in quotes and the spaces in the instance names are preserved. Is there any way to fix this behaviour? Collecting this data into a metrics index has worked fine until now, but thanks to this server having IIS site names with spaces in them it's causing a real problem.

Thanks for your thoughts! Eddie

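(Purely as a hedged workaround sketch, not a confirmed fix: if the same counters are also ingested into an event index, where the spaces survive, you could convert those events into metric data points yourself with mcollect. This assumes the perfmon events carry counter, instance, and Value fields, and event_index is a placeholder for your event index.)

index=event_index sourcetype=perfmon:IISAppPoolState counter="Current Application Pool State"
| eval metric_name="IISAppPoolState.Current Application Pool State", _value=Value
| fields _time host instance metric_name _value
| mcollect index=metrics_index
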
@BoldKnowsNothin - Please check to see if you have any errors/warnings from that host, as suggested by @SanjayReddy. Also, check whether the Splunk service is run by a local user or the System user on Windows, and confirm that the user running the Splunk service has permission to read logs from that folder.

I hope this helps!!!

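For example, a search along these lines should surface forwarder-side errors (the host value is a placeholder for your machine):

index=_internal host=<your_forwarder_host> log_level IN ("ERROR", "WARN")
| stats count by component
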
@Jana42855 - Your work is already done for you. Use the Content Update App from Splunkbase - https://splunkbase.splunk.com/app/3449

You can read about the use cases inside the app here - https://research.splunk.com/detections/

I hope this helps!!! Kindly upvote if it does!!!

@Vani_26 - Try this:

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%m/%d/%y")
| search NOT [| inputlookup HolidayList.csv
    | where like(Alert, "App Relative Logs Data")
    | rename Holidays_Date as day
    | fields day]

Just to make sure: this will not suppress the alert on the holiday, but rather suppress the alert for the data that is timestamped on the holiday. There is a minor difference.

I hope this helps!!! Kindly upvote if it does!!!

@991423214 - There aren't a lot of details in your error messages, but the line below suggests there could be a connectivity issue between your machine and Splunk.

At C:\Users\myusername\OneDrive\Desktop\Lab3.ps1:29 char:1

Also, I see you are trying to send events via HEC (HTTP Event Collector). Have you enabled HEC in the Global Settings?

I hope this helps!! Kindly upvote if it does!!!

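If you have access to the Splunk server, a quick sketch for checking the HEC configuration from the search bar (assuming your role is allowed to call the REST endpoint):

| rest /services/data/inputs/http
| table title, token, disabled

You could also look for HEC-related errors in the internal logs, e.g. index=_internal sourcetype=splunkd component=HttpInputDataHandler.
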
I have the same problem. How did you solve it?
I am trying to send events from my host machine to Splunk using HEC.

My function:

Invoke-RestMethod -Method Post -Uri $hecUri -Headers @{"Authorization" = "Splunk $hecToken"} -Body $jsonEventData -ContentType "application/json"

Error:

Invoke-RestMethod : Unable to connect to the remote server
At C:\Users\myusername\OneDrive\Desktop\Lab3.ps1:29 char:1
+ Invoke-RestMethod -Method Post -Uri $hecUri -Headers @{"Authorization ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Invoke-RestMethod], WebException
    + FullyQualifiedErrorId : System.Net.WebException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand

As stated in my original reply, the settings go on indexers and/or heavy forwarders.  You can put them on search heads, but they won't do any good.  Unless, that is, you have a standalone system (combined indexer and search head).
Does your data in Splunk contain CPU and memory information? If so, what does the data look like, and what fields do you currently have extracted and visible as fields? You need to provide more information for someone to be able to help. I am guessing that you want to find CPU and memory information from the data relating to that query, but the question could also be read as asking what CPU and memory are consumed when you run the query.

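If it is the latter, i.e. the resource cost of running searches, that information normally lives in the _introspection index rather than in your application logs. A rough sketch (the data.* field names assume the standard splunk_resource_usage sourcetype):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart avg(data.cpu_system_pct) AS avg_cpu_system_pct, avg(data.mem_used) AS avg_mem_used
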
With MLTK, when looking at accumulated runtime, the outliers are detected cleanly (three out of three spikes), whereas with the Anomaly Detection app, only two of the three spikes are detected (along with one false positive, even at medium sensitivity).

The code generated by MLTK is as follows:

index=_audit host=XXXXXXXX action=search info=completed
| table _time host total_run_time savedsearch_name
| eval total_run_time_mins=total_run_time/60
| convert ctime(search_*)
| eval savedsearch_name=if(savedsearch_name="","Ad-hoc",savedsearch_name)
| search savedsearch_name!="_ACCEL*" AND savedsearch_name!="Ad-hoc"
| timechart span=30m median(total_run_time_mins)
| eval "atf_hour_of_day"=strftime(_time, "%H"), "atf_day_of_week"=strftime(_time, "%w-%A"), "atf_day_of_month"=strftime(_time, "%e"), "atf_month"=strftime(_time, "%m-%B")
| eventstats dc("atf_hour_of_day"),dc("atf_day_of_week"),dc("atf_day_of_month"),dc("atf_month")
| eval "atf_hour_of_day"=if('dc(atf_hour_of_day)'<2, null(), 'atf_hour_of_day'),"atf_day_of_week"=if('dc(atf_day_of_week)'<2, null(), 'atf_day_of_week'),"atf_day_of_month"=if('dc(atf_day_of_month)'<2, null(), 'atf_day_of_month'),"atf_month"=if('dc(atf_month)'<2, null(), 'atf_month')
| fields - "dc(atf_hour_of_day)","dc(atf_day_of_week)","dc(atf_day_of_month)","dc(atf_month)"
| eval "_atf_hour_of_day_copy"=atf_hour_of_day,"_atf_day_of_week_copy"=atf_day_of_week,"_atf_day_of_month_copy"=atf_day_of_month,"_atf_month_copy"=atf_month
| fields - "atf_hour_of_day","atf_day_of_week","atf_day_of_month","atf_month"
| rename "_atf_hour_of_day_copy" as "atf_hour_of_day","_atf_day_of_week_copy" as "atf_day_of_week","_atf_day_of_month_copy" as "atf_day_of_month","_atf_month_copy" as "atf_month"
| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

And the code generated by the Anomaly Detection app:

``` Same data as above ```
| dedup _time
| sort 0 _time
| table _time XXXX
| interpolatemissingvalues value_field="XXXX"
| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

The major code difference is that MLTK uses:

| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

whereas the Anomaly Detection app uses:

| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

Any ideas why the fit command uses the DensityFunction algorithm in one case and AutoAnomalyDetection in the other, and why the results are different?

@ljvc Thank you for the direction
Ok, thanks. So I should move all the config to the search instead. I have now tried that and the result seems to be the same, still index.
Original query:

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%A")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")

To suppress my alert, I created a lookup file and added the alert name and holiday dates as shown below:

Alert                     Holidays_Date
App Relative Logs Data    8/12/2023
App Relative Logs Data    8/13/2023
App Relative Logs Data    8/14/2023
App Relative Logs Data    8/18/2023

Query with the inputlookup holiday list:

| inputlookup HolidayList.csv
| where like(Alert, "App Relative Logs Data") AND Holidays_Date=strftime(now(), "%m/%d/%y")
| stats count
| eval noholdy=case(count=1, null(), true(), 1)
| search noholdy=1
| fields noholdy
| appendcols [search index=splunk-index
    | where message="start"
    | where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
    | eval day=strftime(_time, "%A")
    | where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
    | where NOT day IN("Tuesday", "Wednesday", "Thursday")]

When I use this query I am still receiving alerts on the dates mentioned in the .csv file, but I don't want to receive those alerts. Is there something wrong in my query? Please help.

I have an index and sourcetype: index=mmts-app sourcetype=application:logs. How do I get CPU and memory for this query?
Thanks - the pre-9 syntax works, but multiple instances of the same repeated log are displayed. Is there a way to limit the output to one set of logs?
@ankitarath2011 - Agreed; generally it is recommended to instead blacklist items in your search head apps that don't need to be sent to the indexers, as this will keep your bundle sizes down. A few additional suggestions:

The Admins Little Helper for Splunk app can help identify bundle contents (and computed/expected contents).

For more information about bundle size, refer to https://docs.splunk.com/Documentation/Splunk/latest/Admin/Distsearchconf

maxBundleSize = <int>
* The maximum size (in MB) of the bundle for which replication can occur.
  If the bundle is larger than this, bundle replication will not occur and
  an error message will be logged.
* Defaults to: 2048 (2GB)

and this post: Large lookup caused the bundle replication to fail. What are my options

Alternatively, you may find large lookups and configure limits.conf accordingly. To find large lookup files, check bundles on the indexer in $SPLUNK_HOME/var/run/searchpeers/. Copy one of the bundles from $SPLUNK_HOME/var/run/searchpeers/ over to a tmp dir and run:

tar xvf <file>.bundle
find -type f -exec du -h {} \; | grep .csv

Set max_memtable_bytes in limits.conf larger than the largest lookup:

[lookup]
max_memtable_bytes = 2*<size of the largest lookup>

Example: on all indexers, set the following in limits.conf (for unclustered indexers: $SPLUNK_HOME/etc/system/local/limits.conf; for clustered indexers, use the _cluster app on the Cluster Manager, or if you distribute indexes.conf and other indexer settings in a custom app, place it in there):

[lookup]
max_memtable_bytes = 135000000
# equals 135MB, where the largest lookup file was 120MB

Hi all, greetings for the day! My manager asked me to create a use case, but I am new to Splunk and only know the basics.

1. Please guide me on where to start and end in creating the use case.
2. Is there a community for creating use cases?

Thanks, Jana.P

Ha! Good to know about makeresults. I didn't know that.