All Topics


Hi, I blacklisted "(?:ParentProcessName).+(?:C:\\Program Files\\Windows Defender Advanced Threat Protection\\)" on the deployment server and applied it to one of the Windows servers. How can we troubleshoot whether it has been applied or not?
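One way to check whether the deployed configuration actually reached the Windows server is to run btool on the forwarder itself and look for the blacklist entry (a sketch, assuming a default Universal Forwarder install path):

```
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug | findstr /i blacklist
```

The --debug flag also prints which .conf file each setting came from, which confirms whether the deployment-server app was applied.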
Hello Splunkers, I am trying the query below:

index=someindex cluster=gw uuid=gw98037234c6e51a48816016172b8a3c56 | eval api_uuid="gw"+reqid | head 1 | append [search index=someindex cluster=api uuid=api_uuid]

Basically, what I am trying to do is get a result from the first search, evaluate a new field from it, and add that field as a condition to the second search. It does not work if I supply the api_uuid field, but if I replace uuid in the append with the actual computed value, it returns the proper result. I have seen a few people using join, but I don't want to use join since it is expensive and comes with limits. Is there a solution to this query?
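One pattern that may help here (a sketch, untested against your data): a bare field name inside `append [search ... uuid=api_uuid]` is treated as the literal string "api_uuid", not the computed value, so compute the value in a subsearch and hand it to the outer search with `return` instead:

```
index=someindex cluster=api
    [ search index=someindex cluster=gw uuid=gw98037234c6e51a48816016172b8a3c56
      | head 1
      | eval uuid="gw".reqid
      | return uuid ]
```

`| return uuid` emits uuid="<computed value>" into the outer search as a filter.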
Hi, I want to list out all the hostnames in my Tripwire log, but my hostname field looks like this:

Hostname
10.10.10.10 : Host A
192.0.0.0 : Host B

My hostnames and IPs are mixed in the same field. How do I split the hostname and IP and list out only the hostnames? Please assist me with this. Thank you.
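Assuming each value really is "<ip> : <name>", a rex along these lines may do it (the field and capture names are illustrative):

```
... | rex field=Hostname "^(?<ip>\d{1,3}(?:\.\d{1,3}){3})\s*:\s*(?<host_name>.+)$"
    | table host_name
```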
I'm using Splunk to collect the state of Microsoft IIS web server app pools. I've noticed that when the Universal Forwarder collects Perfmon data whose instance names contain spaces, and the data is ingested into a metrics index, the part of the instance name after the first space is lost. This doesn't happen if I ingest into a normal index. Here is my configuration in the inputs.conf file:

[perfmon://IISAppPoolState]
interval = 10
object = APP_POOL_WAS
counters = Current Application Pool State
instances = *
disabled = 0
index = metrics_index
mode = single
sourcetype = perfmon:IISAppPoolState

It is on a machine whose IIS pools have spaces in their names, e.g. "company website", "company portal", "HR web". When this data is ingested into the metrics index and accessed via the following Splunk command:

| mstats latest(_value) as IISAppPoolState WHERE index=metrics_index metric_name="IISAppPoolState.Current Application Pool State" by instance, host

I end up with instance values truncated at the first space. So "company website" becomes just "company" (and who knows what happens to "company portal"). However, if I direct the data into a normal index, the instance names are wrapped in quotes and the space in the instance name is preserved. Is there any way to fix this behaviour? Collecting this data into a metrics index has worked fine until now, but because this server's IIS site names contain spaces it's causing a real problem.

Thanks for your thoughts! Eddie
I am trying to send events from my host machine to Splunk using HEC. My function:

Invoke-RestMethod -Method Post -Uri $hecUri -Headers @{"Authorization" = "Splunk $hecToken"} -Body $jsonEventData -ContentType "application/json"

Error:

Invoke-RestMethod : Unable to connect to the remote server
At C:\Users\myusername\OneDrive\Desktop\Lab3.ps1:29 char:1
+ Invoke-RestMethod -Method Post -Uri $hecUri -Headers @{"Authorization ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Invoke-RestMethod], WebException
    + FullyQualifiedErrorId : System.Net.WebException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand
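"Unable to connect to the remote server" is a network-level failure, so it usually points at the URI, port, or TLS rather than the token or body. For reference, a minimal sketch of a working call (the host name and token below are placeholders; HEC listens on port 8088 by default and the event endpoint is /services/collector/event):

```
$hecUri = "https://splunk.example.com:8088/services/collector/event"
$hecToken = "00000000-0000-0000-0000-000000000000"
$jsonEventData = '{"event": "hello from PowerShell", "sourcetype": "manual"}'
Invoke-RestMethod -Method Post -Uri $hecUri `
    -Headers @{ "Authorization" = "Splunk $hecToken" } `
    -Body $jsonEventData -ContentType "application/json"
```

Worth checking: that HEC is enabled on the Splunk side, that $hecUri includes the port and the /services/collector/event path, and that a firewall is not blocking 8088.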
With MLTK, when looking at accumulated runtime, the outliers are detected cleanly (three out of three spikes), whereas with the Anomaly Detection app only two of the three spikes are detected (along with one false positive, even at medium sensitivity).

The code generated by MLTK is as follows:

index=_audit host=XXXXXXXX action=search info=completed
| table _time host total_run_time savedsearch_name
| eval total_run_time_mins=total_run_time/60
| convert ctime(search_*)
| eval savedsearch_name=if(savedsearch_name="","Ad-hoc",savedsearch_name)
| search savedsearch_name!="_ACCEL*" AND savedsearch_name!="Ad-hoc"
| timechart span=30m median(total_run_time_mins)
| eval "atf_hour_of_day"=strftime(_time, "%H"), "atf_day_of_week"=strftime(_time, "%w-%A"), "atf_day_of_month"=strftime(_time, "%e"), "atf_month"=strftime(_time, "%m-%B")
| eventstats dc("atf_hour_of_day"),dc("atf_day_of_week"),dc("atf_day_of_month"),dc("atf_month")
| eval "atf_hour_of_day"=if('dc(atf_hour_of_day)'<2, null(), 'atf_hour_of_day'),"atf_day_of_week"=if('dc(atf_day_of_week)'<2, null(), 'atf_day_of_week'),"atf_day_of_month"=if('dc(atf_day_of_month)'<2, null(), 'atf_day_of_month'),"atf_month"=if('dc(atf_month)'<2, null(), 'atf_month')
| fields - "dc(atf_hour_of_day)","dc(atf_day_of_week)","dc(atf_day_of_month)","dc(atf_month)"
| eval "_atf_hour_of_day_copy"=atf_hour_of_day,"_atf_day_of_week_copy"=atf_day_of_week,"_atf_day_of_month_copy"=atf_day_of_month,"_atf_month_copy"=atf_month
| fields - "atf_hour_of_day","atf_day_of_week","atf_day_of_month","atf_month"
| rename "_atf_hour_of_day_copy" as "atf_hour_of_day","_atf_day_of_week_copy" as "atf_day_of_week","_atf_day_of_month_copy" as "atf_day_of_month","_atf_month_copy" as "atf_month"
| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

And the code generated by the Anomaly Detection app:

``` Same data as above ```
| dedup _time
| sort 0 _time
| table _time XXXX
| interpolatemissingvalues value_field="XXXX"
| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

The major difference is that with MLTK we use:

| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

whereas with the Anomaly Detection app we use:

| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

Any ideas why one fit uses DensityFunction and the other AutoAnomalyDetection, and why the results are different?
original query:

index=splunk-index
| where message="start"
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%A")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")

To suppress my alert, I created a lookup file and added the alert name and holiday dates as shown below:

Alert | Holidays_Date
App Relative Logs Data | 8/12/2023
App Relative Logs Data | 8/13/2023
App Relative Logs Data | 8/14/2023
App Relative Logs Data | 8/18/2023

Query with the inputlookup holiday list:

| inputlookup HolidayList.csv
| where like(Alert, "App Relative Logs Data") AND Holidays_Date=strftime(now(), "%m/%d/%y")
| stats count
| eval noholdy=case(count=1, null(), true(), 1)
| search noholdy=1
| fields noholdy
| appendcols [search index=splunk-index | where message="start" | where NOT app IN("ddm", "wwe", "tygmk", "ujhy") | eval day=strftime(_time, "%A") | where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h") | where NOT day IN("Tuesday", "Wednesday", "Thursday")]

When I use this query I am still receiving the alert on the dates mentioned in the .csv file, but I don't want to receive those alerts. Is there something wrong in my query? Please help.
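Two things stand out. First, appendcols just glues columns side by side, so the appendcols results come back even when the noholdy filter empties the first search. Second, strftime(now(), "%m/%d/%y") produces zero-padded, two-digit-year dates (e.g. 08/12/23), which will never match 8/12/2023 as stored in the lookup. One common idiom (a sketch; field names taken from your lookup, and the non-padded %-m/%-d formats may be OS-dependent, so normalizing the lookup dates is an alternative) is to let a subsearch inject a kill-switch term into the main search:

```
index=splunk-index message="start"
    [ | inputlookup HolidayList.csv
      | where Alert="App Relative Logs Data"
            AND Holidays_Date=strftime(now(), "%-m/%-d/%Y")
      | stats count
      | eval search=if(count=0, "*", "index=_nonexistent_suppress")
      | return $search ]
| where NOT app IN("ddm", "wwe", "tygmk", "ujhy")
| eval day=strftime(_time, "%A")
| where _time >= relative_time(_time, "@d+4h") AND _time <= relative_time(_time, "@d+14h")
| where NOT day IN("Tuesday", "Wednesday", "Thursday")
```

When today is a holiday, the subsearch injects a term that matches nothing, so the alert finds zero results and does not fire.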
I have an index and sourcetype: index=mmts-app sourcetype=application:logs. How do I get CPU and memory usage for this query?
Hi All, greetings for the day! My manager asked me to create a use case, but I am new to Splunk and only know the basics. 1. Please guide me on where to start and end when creating the use case. 2. Is there a community resource for creating use cases? Thanks, Jana.P
My dear comrades, I'm facing something unreal. We just deployed an application on the host with a stanza that looks like [monitor://C:\Data\log\*]. Unfortunately, we cannot see any entries in Splunk. But when I copied some files to another location on the host and changed the stanza to something like [monitor://C:\Program Files\Data\log\*], it sends data. The folder permissions etc. are all the same. Our application's path is hard-coded, so we cannot simply change it the way we did in this test. Any help will be much appreciated.
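To see whether the forwarder is even attempting to monitor that path, its internal logs can be searched (a sketch; replace the host placeholder with your forwarder's name):

```
index=_internal host=<your_host> source=*splunkd.log* (component=TailingProcessor OR component=WatchedFile) "C:\Data\log"
```

Messages there often say outright why a path is being skipped (permissions, path not found, file already indexed, etc.).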
Hi All, I have got two queries to populate the host, region, tech stack, and environment details. The first query is against a lookup table that has the list of all hosts:

| inputlookup Master_List.csv
| search Region="Asia"
| search "Tech Stack"="Apple"
| rename host as Total_Servers
| table Total_Servers

which gives the table below:

Total_Servers
Apple1
Apple2
Apple3
Apple4
Apple5
Apple6

The second query gives us the list of hosts that are currently reporting into Splunk:

... | rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
| rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
| rex field=_raw "\]\,(?P<host>[^\,]+)\,"
| rex field=_raw "\]\|(?P<host>[^\|]+)\|"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| lookup Master_List.csv "host"
| search "Tech Stack"="Apple"
| search Region="Asia"
| rename host as "Reporting_Servers"
| table "Reporting_Servers"

which gives the table below:

Reporting_Servers
Apple1
Apple4
Apple5

Now I want to create a query that compares these two tables and lists the servers missing from the total. The output after comparison should look like this:

Non_Reporting_Servers
Apple2
Apple3
Apple6

Please help me create a query to achieve the expected output table. Your kind inputs are highly appreciated.

Thank you..!!!
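One way to get the difference (a sketch built from your two searches; the subsearch must return a field literally named host so it turns into a NOT host=... filter on the lookup rows):

```
| inputlookup Master_List.csv
| search Region="Asia" "Tech Stack"="Apple"
| table host
| search NOT
    [ search <your second search here, with your existing rex extractions>
      | dedup host
      | fields host ]
| rename host as Non_Reporting_Servers
```

The key point is doing the rename to Total_Servers/Reporting_Servers only at the end, so both sides still share the host field while the NOT subsearch filter is applied.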
To all the Python masters out there: Python execution time optimization using multithreading. I have a Python script which takes a list of 1000 IPs from a file and monitors ports 3389 and 22 respectively, using Python's os module. It currently takes 40 minutes to run. The requirement is to run the same scripted input within 10 minutes. I have tried multithreading, but the output is not sequential, so I am not able to ingest it...
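One sketch of how to keep the speedup while preserving output order: ThreadPoolExecutor's map returns results in input order regardless of which probe finishes first. The socket-based check below replaces the os-module call (function and variable names are illustrative, not from the original script):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(ip, port, timeout=3):
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(targets, workers=100):
    """Check (ip, port) pairs concurrently.

    executor.map yields results in the same order as the input, so the
    output stays sequential even though the probes run in parallel.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: check_port(*t), targets))

# e.g. targets = [(ip, p) for ip in ips_from_file for p in (3389, 22)]
```

With ~100 workers and a 3-second timeout, 2000 probes are bounded by roughly (2000/100) * 3 = 60 seconds in the worst case, comfortably inside the 10-minute window.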
I have a query where I am looking for multiple values with OR and then counting the occurrences with stats. The query is something like this:

index=**** ("value1") OR ("value3") OR ...
| stats count(eval(searchmatch("value1"))) as value1, count(eval(searchmatch("value2"))) as value2

Now I want to keep only those values which are found, meaning their count is greater than 0. How can I achieve this so that only the stats for values actually found in the events are displayed? The search values are mostly IPs, URLs, domains, etc. Note: I'm making this query for a dashboard.
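Since stats returns a single row here, one option is to transpose it and filter the rows (a sketch):

```
index=**** ("value1") OR ("value2") OR ...
| stats count(eval(searchmatch("value1"))) as value1,
        count(eval(searchmatch("value2"))) as value2
| transpose column_name=indicator
| rename "row 1" as count
| where count > 0
```

Each searched value becomes its own row, and only the ones with a nonzero count survive the where clause, which also renders nicely in a dashboard table.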
Hello, I have a list of IPs generated from the following search:

index=<source> | stats count by ip

and I want to identify IPs that do not belong to any of the IP address ranges in my results. Example:

a.b.c.101
a.b.c.102
a.b.c.103
d.e.f.g
a.b.c.104

I want to keep only the address d.e.f.g. Thanks in advance for your help. Regards,
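If "range" here means the /24 prefix (an assumption based on the example), one sketch is to count how many results share each prefix and keep the loners:

```
index=<source>
| stats count by ip
| rex field=ip "^(?<subnet>\d+\.\d+\.\d+)\."
| eventstats count as subnet_count by subnet
| where subnet_count=1
| table ip
```

Addresses like a.b.c.101 through a.b.c.104 share the subnet a.b.c and get a subnet_count of 4, so only d.e.f.g survives the filter.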
Hello to all dear friends and fellow platform users. I have 36 indexers and 7 heavy forwarders in my cluster. Every once in a while, I notice that logs from one of the devices I receive logs from are not making it into Splunk, even though the log is actually being sent from the source. On further investigation, I find that the log is sent from the source and received by one of the 7 HFs, but either the HF does not send it to the indexers or the indexers do not index it, so from Splunk's point of view the log from that device appears disconnected.

a. In a scenario with indexer clustering and a large number of HFs, is there a way to find out whether a log was correctly passed from the HF to the indexer or not?
b. What is the cause of this problem, and how can it be solved?

Thank you.
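As a starting point for (a), each HF records its outbound connections to the indexers in its own internal metrics, so something like this sketch can show which HF is (or is not) talking to which indexer:

```
index=_internal source=*metrics.log* group=tcpout_connections
| stats count by host, destIp, destPort
```

Warnings from the output pipeline (blocked queues, failed connections, TcpOutputProc errors) also appear in index=_internal from splunkd.log, so a gap in tcpout_connections for a given HF/indexer pair narrows down which hop is dropping the data.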
Hello Splunk Community, I'm encountering an issue with a custom app I've developed for Splunk. The app is designed to collect and analyze data from various sources, and it has been working perfectly until recently. However, after a recent update to Splunk, I've noticed that some of my custom data inputs are not functioning as expected. Specifically, I've configured data inputs using the modular input framework, and these inputs were collecting data without any problems. Now, I'm seeing errors in the logs related to these inputs, and data ingestion has become inconsistent. Has anyone else experienced similar issues with modular inputs after a Splunk update? Are there any known compatibility issues or changes in the latest Splunk version that might affect custom data inputs? I'd appreciate any insights or suggestions on how to troubleshoot and resolve this problem. Thanks in advance!
Hi Team, would it be possible to include 'update email' actions in the "MS Graph for Office 365" SOAR app (similar to the "EWS for Office 365" app)? Thank you, CK
Hello comrades, we are using the universal forwarder on our hosts, and we have a noisy host that produces EventID 4674 and exceeds our license limit. Can we silence it on the agent side, but only for EventID 4674? Sorry, newbie here. Many thanks,
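Yes: on the UF this can be done in inputs.conf with event blacklisting (a sketch, assuming the events arrive via the Windows Security event log input):

```
[WinEventLog://Security]
blacklist = 4674
```

Deploy that to the forwarder's app and restart the UF; EventID 4674 is then dropped at the agent before it is forwarded, so it no longer counts against the license.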
Is it possible to run different filters in an index search based on a condition from the dropdown below? The second filter works for both IPv4 and IPv6, but it slows down the search, and I don't want IPv4 addresses going through my IPv6 filter. Thanks.

If the IPv4 dropdown box is selected > select 1.1.1.1
ip_token=1.1.1.1
Search: index=vulnerability_index ip="$ip_token$"

If the IPv6 dropdown box is selected > select 2001:db8:3333:4444:5555:6666::2101
ip_token=2001:db8:3333:4444:5555:6666::2101
Search: index=vulnerability_index | rex mode=sed field=ip "s/<regex>/<replacement>/<flags>" | search ip="$ip_token$"
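One approach (a sketch in Simple XML; the token and label names are assumptions) is to have the dropdown set the whole filter string, so each branch runs only its own filter:

```
<input type="dropdown" token="ipver">
  <label>IP version</label>
  <choice value="ipv4">IPv4</choice>
  <choice value="ipv6">IPv6</choice>
  <change>
    <condition value="ipv4">
      <set token="ip_filter">ip="$ip_token$"</set>
    </condition>
    <condition value="ipv6">
      <set token="ip_filter">| rex mode=sed field=ip "s/&lt;regex&gt;/&lt;replacement&gt;/&lt;flags&gt;" | search ip="$ip_token$"</set>
    </condition>
  </change>
</input>
```

The panel search then becomes index=vulnerability_index $ip_filter$, so the sed-based IPv6 normalization only runs when the IPv6 branch is selected.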
Hello Splunk community, I have an issue with a Splunk deployment server where the /var filesystem is 30 GB in size and 22 GB are currently being used by the log "uncategorised.log" under the path /var/log/syslog. Is it viable/possible to delete that log, or to back it up to tape or a different server?
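Yes, archiving and then emptying the file is viable; this is general Linux log housekeeping rather than anything Splunk-specific. One caveat: if a process still holds the file open, rm does not free the space until that process restarts, so truncating in place is usually safer. A runnable sketch (demonstrated on a scratch file; in practice LOG would be /var/log/syslog/uncategorised.log, and the backup destination is an assumption):

```shell
# Demo of truncate-in-place on a scratch file; substitute the real path.
LOG=$(mktemp)
echo "old log data" > "$LOG"

# optional: archive a copy first, e.g.
#   cp "$LOG" /backup/uncategorised.log.$(date +%F)

: > "$LOG"      # empty the file while keeping the same inode open
```

Afterwards it is worth adding a logrotate rule for uncategorised.log so /var does not fill up again.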