Thanks @yuanliu for your quick response. I am not sure how to achieve this by creating a custom command.
Earlier I tried ending my query with | fillnull value=0, which did not work. Now I tried | fillnull App1 App2 App2 App4 and it is working.
Thanks for your answer; I guess I need to provide more detail. This is a Windows 11 client and a Windows Server 2012 system. I was not able to find an event ID for this activity in Event Viewer.
Splunk will report on the data it has, so you first have to identify which logs or other data sources record data being copied to removable media; that is something you have to find out based on your own systems. Once you know where the data is, ingest that data source into Splunk, extract fields, and use them to report.
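For Windows hosts, one place this data can show up is the Security event log when the "Audit Removable Storage" policy is enabled, which generates Event ID 4663 for file accesses on removable devices. As a hedged sketch (assuming Security logs are already ingested via the Splunk Add-on for Microsoft Windows, and that the index and field names like Object_Name and Accesses match your extraction):

```spl
index=wineventlog source="WinEventLog:Security" EventCode=4663
| table _time host user Object_Name Accesses
```

The index, source, and field names here are assumptions; adjust them to your environment.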
Thank you @yuanliu! This worked really well. I added my eval commands to it as well and was able to produce the table I was seeking, with your great query as a guide. I've expanded the time range to 14 days because I realized 7 days was a little pointless since most of my batches only run M-F. My final query ended up being:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" "posbatch04" earliest=-14d@d latest=-0d@d
| eval dayback = mvrange(0, 14)
| eval day = mvmap(dayback, if(_time < relative_time(now(), "-" . dayback . "d@day") AND relative_time(now(), "-" . tostring(dayback + 1) . "d@day") < _time, dayback, null()))
| stats min(_time) as Earliest max(_time) as Latest by day
| fieldformat Earliest = strftime(Earliest, "%F %T")
| fieldformat Latest = strftime(Latest, "%F %T")
| eval day = "day -" . tostring(day + 1)
| eval Elapsed_Time=Latest-Earliest, Start_Time_Std=strftime(Earliest,"%H:%M:%S:%Y-%m-%d"), End_Time_Std=strftime(Latest,"%H:%M:%S:%Y-%m-%d")
| eval Elapsed_Time=Elapsed_Time/60

Lastly, I will figure out how to organize this by day in descending order; right now it is sorting the results by another column. Much appreciated for the help and the fast response; I would never have figured this out.
The fillnull command does not support wildcards. Try using foreach as a wrapper around fillnull:

| foreach 20* [ fillnull value="0.0" "<<FIELD>>" ]
Hi @Razzi,
you should define the fields that you can use to identify what to monitor: host (present in each log) and process. Then create a lookup (called e.g. perimeter.csv) containing the hosts to monitor (assuming that the three services to monitor must be active on all the servers). Then run a search like the following:

index=<your_index> process IN (TBD1, TBD2, TBD3)
| stats dc(process) AS process_count values(process) AS process count BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats dc(process) AS process_count values(process) AS process sum(count) AS total BY host
| where total=0 OR process_count<3
| eval status=if(total=0, "missed host", "missed process")
| table host status process
| rename process AS "present processes"

Ciao.
Giuseppe
I am fairly new to the Splunk platform/community; I am in learning mode and I hope to get some help here. How do I set up/configure an alert on a set of Windows Servers to notify me when a particular set of services stops? For example, I have three services whose names start with TDB; how can I configure Splunk to alert if any of those services stops on a particular server? Thanks much.
The "no file found" message is excluded in the base search:

index=mulesoft environment=* applicationName IN ("processor","api") message!="No files found for*"
You can use the "windowstats" command to achieve your goal.

First, download the windowstats app from here: https://splunkbase.splunk.com/app/7329

Then run either:

your query | windowstats field=<field name> window=4 function=avg style=gradual

or:

your query | windowstats field=<field name> window=4 function=avg style=dynamic

The difference between gradual and dynamic is how the window behaves at the edges. In both styles, window=4 means 4 neighboring values in addition to the middle value, so the total window size is 5.

At t=0 (the first element), gradual uses:

x(t), x(t+1), x(t+2), x(t+3), x(t+4)

while dynamic uses:

x(t), x(t+1), x(t+2)

At the last element, gradual uses:

x(t-4), x(t-3), x(t-2), x(t-1), x(t)

while dynamic uses:

x(t-2), x(t-1), x(t)

Both styles work the same way for the middle values.

Happy Splunking!
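As an illustrative end-to-end sketch (assuming the windowstats app is installed; makeresults and streamstats are built-in commands, and the series here is just sample data):

```spl
| makeresults count=10
| streamstats count AS t
| eval value = t * t
| windowstats field=value window=4 function=avg style=gradual
```

The windowstats parameters are the ones from the answer above; swap style=dynamic to compare the edge behavior.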
My search ends with:

| table Afdeling 20* Voorlaatste* Laatste* verschil

It has several detail rows and one row with totals. I want to use fillnull on the 20* columns (2023-10, 2023-11, etc.) but not on Voorlaatste*, Laatste*, and verschil. I can't use

| fillnull 20* value="0.0"

because that adds a column named "20*", and I don't want to list fillnull 2023-10 etc. individually. Is there a way to do this?
I just want to exclude messages that contain "No files found". If a message contains "No files found", we don't want to show that particular transaction. The search command at the end filters on the values chosen from the dashboard dropdown, so I used | search InterfaceName at the end.
I don't understand what is meant by "mvfind is not for interfacename". The mvfind function can be used with any multi-value field (InterfaceName is multi-valued since it is created by the values function), and it can match multiple values with a regular expression:

| where isnotnull(mvfind(InterfaceName, "ABC|ABCD|COP"))
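A quick self-contained way to see this behavior (makeresults and split generate a sample multi-value field here; the regular expression is the one from the answer above):

```spl
| makeresults
| eval InterfaceName=split("ABC,XYZ,COP", ",")
| where isnotnull(mvfind(InterfaceName, "ABC|ABCD|COP"))
```

The event is kept because at least one value of InterfaceName matches the pattern; mvfind returns the index of the first matching value, or null if nothing matches.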
Actually, I am using multiple values in InterfaceName, and mvfind is not for InterfaceName.

| search InterfaceName IN ("ABC", "ABCD", "COP")
Hi, follow all the steps and plan your upgrade using the link below; there are many steps and processes to follow. Most importantly, always read the README for the version you are upgrading to. https://docs.splunk.com/Documentation/Splunk/9.0.1/Installation/HowtoupgradeSplunk
I don't know the exact Postman config for filtering, but via the CLI you can test the following first, assuming you can use a Linux system. For the API call, the field seems to be called name and not title, as I have noticed; this is a difference between | rest and calling the API directly (I don't know why). Furthermore, if you install the jq command (a JSON processor), it will help extract the two fields you want; if not, remove it from my command below. You will need a token created in Splunk. See my example below:

curl -k -H "Authorization: Bearer <YOUR TOKEN>" https://*****:8089/servicesNS/-/-/admin/macros --get -d output_mode=json | jq '.entry[] | {name: .name, definition: .content.definition}'

This should give you the name of each macro and its definition; optionally output to a JSON file.
The forwarders (UF) or HF have built-in functionality to send the data to both, as long as you configure the group names of the servers in outputs.conf. See the section "Configure load balancing on a universal forwarder with outputs.conf": https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Configureforwardingwithoutputs.conf See this document for how auto load balancing works: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Forwarding/Setuploadbalancingd
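A minimal outputs.conf sketch for load balancing across two standalone indexers (the hostnames and port here are placeholders; adjust to your environment):

```
[tcpout]
defaultGroup = standalone_indexers

[tcpout:standalone_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

Note that with a single target group like this, the forwarder auto load-balances across the listed servers (switching between them over time) rather than duplicating every event to both; duplicating the data stream requires cloning via multiple tcpout target groups.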
And in the case of installing and configuring a forwarder to send collected data to the indexers, if the indexers are not in a cluster, is it possible to configure it to send data to both indexers simultaneously?
Thank you for your assistance and your response