@991423214 - There aren't many details in your error messages, but the line below suggests there could be a connectivity issue between your machine and Splunk:

At C:\Users\myusername\OneDrive\Desktop\Lab3.ps1:29 char:1

Also, I see you are trying to send events via HEC (HTTP Event Collector). Have you enabled HEC in Global Settings?

I hope this helps! Kindly upvote if it does!
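Before debugging the request itself, it can help to confirm basic TCP reachability to the HEC endpoint (port 8088 by default). A minimal sketch in Python; the host name here is a made-up placeholder, substitute your own Splunk host and port:

```python
import socket

def can_reach(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host): check the default HEC port on your Splunk server.
# print(can_reach("splunk.example.com", 8088))
```

If this returns False, the "Unable to connect to the remote server" error is a network/firewall/listener problem rather than anything wrong with the request body.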
I have the same problem. How did you solve it?
I am trying to send events from my host machine to Splunk using HEC.

My function:

Invoke-RestMethod -Method Post -Uri $hecUri -Headers @{"Authorization" = "Splunk $hecToken"} -Body $jsonEventData -ContentType "application/json"

Error:

Invoke-RestMethod : Unable to connect to the remote server
At C:\Users\myusername\OneDrive\Desktop\Lab3.ps1:29 char:1
+ Invoke-RestMethod -Method Post -Uri $hecUri -Headers @{"Authorization ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Invoke-RestMethod], WebException
    + FullyQualifiedErrorId : System.Net.WebException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand
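For reference, the body HEC expects at /services/collector is a JSON object with an "event" field plus optional metadata such as sourcetype, index, and time. A small Python sketch of building that payload (the field values are made up for illustration); the Invoke-RestMethod call above would send the equivalent JSON with the same "Authorization: Splunk &lt;token&gt;" header:

```python
import json
import time

def build_hec_payload(event, sourcetype=None, index=None):
    """Build the JSON body for a Splunk HEC /services/collector request."""
    payload = {"event": event, "time": time.time()}
    if sourcetype:
        payload["sourcetype"] = sourcetype
    if index:
        payload["index"] = index
    return json.dumps(payload)

# Hypothetical event data, analogous to $jsonEventData in the script above.
body = build_hec_payload({"message": "test"}, sourcetype="demo")
```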
As stated in my original reply, the settings go on indexers and/or heavy forwarders.  You can put them on search heads, but they won't do any good.  Unless, that is, you have a standalone system (combined indexer and search head).
Does your data in Splunk contain CPU and memory information? If so, what does the data look like, and what fields do you currently have extracted and visible as fields? You need to provide more information for someone to be able to help. I am guessing that you want to find CPU and memory information in the data returned by that query, but the question could also be read as asking what CPU and memory are consumed when you run the query.
With MLTK, when looking at accumulated runtime, the outliers are detected cleanly (three out of three spikes), whereas with the anomaly detection app, only two of the three spikes are detected (along with one false positive, even at medium sensitivity).

The code generated by MLTK is as follows:

index=_audit host=XXXXXXXX action=search info=completed
| table _time host total_run_time savedsearch_name
| eval total_run_time_mins=total_run_time/60
| convert ctime(search_*)
| eval savedsearch_name=if(savedsearch_name="","Ad-hoc",savedsearch_name)
| search savedsearch_name!="_ACCEL*" AND savedsearch_name!="Ad-hoc"
| timechart span=30m median(total_run_time_mins)
| eval "atf_hour_of_day"=strftime(_time, "%H"), "atf_day_of_week"=strftime(_time, "%w-%A"), "atf_day_of_month"=strftime(_time, "%e"), "atf_month"=strftime(_time, "%m-%B")
| eventstats dc("atf_hour_of_day"),dc("atf_day_of_week"),dc("atf_day_of_month"),dc("atf_month")
| eval "atf_hour_of_day"=if('dc(atf_hour_of_day)'<2, null(), 'atf_hour_of_day'),"atf_day_of_week"=if('dc(atf_day_of_week)'<2, null(), 'atf_day_of_week'),"atf_day_of_month"=if('dc(atf_day_of_month)'<2, null(), 'atf_day_of_month'),"atf_month"=if('dc(atf_month)'<2, null(), 'atf_month')
| fields - "dc(atf_hour_of_day)","dc(atf_day_of_week)","dc(atf_day_of_month)","dc(atf_month)"
| eval "_atf_hour_of_day_copy"=atf_hour_of_day,"_atf_day_of_week_copy"=atf_day_of_week,"_atf_day_of_month_copy"=atf_day_of_month,"_atf_month_copy"=atf_month
| fields - "atf_hour_of_day","atf_day_of_week","atf_day_of_month","atf_month"
| rename "_atf_hour_of_day_copy" as "atf_hour_of_day","_atf_day_of_week_copy" as "atf_day_of_week","_atf_day_of_month_copy" as "atf_day_of_month","_atf_month_copy" as "atf_month"
| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

And the code generated by the anomaly detection app:

``` Same data as above ```
| dedup _time
| sort 0 _time
| table _time XXXX
| interpolatemissingvalues value_field="XXXX"
| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

The major code difference is that with MLTK we use:

| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

whereas with the anomaly detection app we use:

| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

Any ideas why the fit function uses DensityFunction vs AutoAnomalyDetection parameters, and why the results are different?
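To illustrate the conceptual difference: DensityFunction with dist=expon fits an exponential distribution to the values and flags points whose tail probability falls below the threshold (0.01 above). A rough, self-contained Python sketch of that idea; the sample runtimes and the simple rate estimate (1/mean) are assumptions for illustration, not what MLTK does internally:

```python
import math

def expon_outliers(values, threshold=0.01):
    """Fit an exponential distribution (rate = 1/mean) and flag values whose
    right-tail probability P(X > x) = exp(-x/mean) falls below threshold."""
    mean = sum(values) / len(values)
    return [math.exp(-v / mean) < threshold for v in values]

# Mostly small runtimes with one large spike; only the spike gets flagged.
runtimes = [1.0, 2.0, 1.5, 2.5, 1.2, 30.0]
flags = expon_outliers(runtimes)
```

AutoAnomalyDetection, by contrast, is a black-box pipeline tuned by a single sensitivity knob, so the two commands can reasonably disagree on borderline spikes.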
@ljvc Thank you for the direction
OK, thanks. So I should move all the config to the search instead. I have now tried that and the result seems to be the same, still index.
Original query:

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%A")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")

To suppress my alert, I created a lookup file and added the alert name and holiday dates as shown below:

Alert                    Holidays_Date
App Relative Logs Data   8/12/2023
App Relative Logs Data   8/13/2023
App Relative Logs Data   8/14/2023
App Relative Logs Data   8/18/2023

Query with the inputlookup holiday list:

| inputlookup HolidayList.csv
| where like(Alert, "App Relative Logs Data") AND Holidays_Date=strftime(now(), "%m/%d/%y")
| stats count
| eval noholdy=case(count=1, null(), true(), 1)
| search noholdy=1
| fields noholdy
| appendcols
    [ search index=splunk-index
    | where message="start"
    | where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
    | eval day=strftime(_time, "%A")
    | where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
    | where NOT day IN("Tuesday", "Wednesday", "Thursday") ]

When I use this query, I am still receiving alerts on the dates listed in the .csv file, but I don't want to receive them. Is there something wrong in my query? Please help.
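One detail worth double-checking in the suppression query above is the date comparison: strftime uses C-style format codes, so "%m/%d/%y" produces a zero-padded month and day with a two-digit year, which can never equal a lookup value written like "8/12/2023". A quick Python sketch of the mismatch (Python's strftime uses the same codes):

```python
from datetime import datetime

d = datetime(2023, 8, 12)
formatted = d.strftime("%m/%d/%y")   # zero-padded month/day, two-digit year
lookup_value = "8/12/2023"           # the format used in the lookup file above
same = (formatted == lookup_value)   # the two strings never match
```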
I have an index and sourcetype: index=mmts-app sourcetype=application:logs. How do I get CPU and memory for this query?
Thanks - the pre-9 syntax works, but multiple instances of the same repeated log are displayed. Is there a way to limit the output to one set of logs?
@ankitarath2011 - Agreed, generally it is recommended to instead blacklist items in your search head apps that don't need to be sent to the indexers, as this will keep your bundle sizes down.

A few additional suggestions:

The Admins Little Helper for Splunk app can help identify bundle contents (and computed/expected contents).

For more information about bundle size, refer to https://docs.splunk.com/Documentation/Splunk/latest/Admin/Distsearchconf

maxBundleSize = <int>
* The maximum size (in MB) of the bundle for which replication can occur. If the
  bundle is larger than this, bundle replication will not occur and an error
  message will be logged.
* Defaults to: 2048 (2GB)

and: Large lookup caused the bundle replication to fail. What are my options?

Alternatively, you may find large lookups and configure limits.conf accordingly.

To find large lookup files, check bundles on the indexer in $SPLUNK_HOME/var/run/searchpeers/. Copy one of the bundles from $SPLUNK_HOME/var/run/searchpeers/ over to a tmp dir and run:

tar xvf <file>.bundle
find -type f -exec du -h {} \; | grep .csv

Then set max_memtable_bytes larger than the largest lookup in limits.conf:

[lookup]
max_memtable_bytes = 2*<size of the largest lookup>

Example: on all indexers, set the following in limits.conf (for unclustered indexers: $SPLUNK_HOME/etc/system/local/limits.conf; for clustered indexers, use the _cluster app on the Cluster Manager, or if you distribute indexes.conf and other indexer settings in a custom app, place it there):

[lookup]
max_memtable_bytes = 135000000
# equals 135MB, where the largest lookup file was 120MB
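The sizing rule above (set max_memtable_bytes to roughly twice the largest lookup) can be sketched as a small helper; the example sizes are made up for illustration:

```python
def suggest_max_memtable_bytes(csv_sizes):
    """Given the sizes (in bytes) of the extracted lookup CSVs, suggest a
    max_memtable_bytes value of twice the largest, per the rule above."""
    return 2 * max(csv_sizes)

# e.g. lookups of 10MB, 45MB and 120MB: the suggestion is twice 120MB.
suggestion = suggest_max_memtable_bytes([10_000_000, 45_000_000, 120_000_000])
```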
Hi all, greetings for the day!

My manager asked me to create a use case, but I am new to Splunk and only know the basics.

1. Please guide me on where to start and end when creating the use case.
2. Is there any community for creating use cases?

Thanks,
Jana.P
Ha! Good to know about the makeresults. I didn't know that.
Try this pre-9 syntax:

| makeresults
| eval _raw="process,message
A,message 0
B,message 0
A,message 1
B,message 1
A,message 2
B,message 2
A,message 1
B,message 3
A,message 2
A,message 1
A,message 2"
| multikv forceheader=1
| table process,message
| eventstats count as repeats by process message
| where repeats > 1
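The eventstats/where combination above is essentially a group-count filter: count events per (process, message) pair, then keep only rows whose pair occurs more than once. The same logic sketched in Python, using the sample rows from the search above:

```python
from collections import Counter

rows = [
    ("A", "message 0"), ("B", "message 0"),
    ("A", "message 1"), ("B", "message 1"),
    ("A", "message 2"), ("B", "message 2"),
    ("A", "message 1"), ("B", "message 3"),
    ("A", "message 2"), ("A", "message 1"), ("A", "message 2"),
]

counts = Counter(rows)                          # count per (process, message) pair
repeated = [r for r in rows if counts[r] > 1]   # keep rows whose pair repeats
```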
Hi @gcusello, thank you very much for your inputs! The query worked perfectly for me.
See below - no output from search string...  
Splunk Enterprise Version:8.2.7.1
Hi @BoldKnowsNothin - Did you see any warning/error messages in splunkd.log for the file you initially monitored? The log messages in splunkd.log will help to troubleshoot further.
My dear comrades, I'm facing something unreal. We just deployed an application on the host with a monitor stanza that looks like [monitor://C:\Data\log\*]. Unfortunately we cannot see any entries in Splunk. But when I copied some files to another location on the host and changed the application to something like [monitor://C:\Program Files\Data\log\*], it sends data. The folder permissions etc. are all the same. Our application path is hard-coded, so we cannot change it the way we did in this test. Any help will be much appreciated.