All Posts



Hi, I am sending logs to another product, without indexing them on Splunk, by using the "SYSLOG_ROUTING" DEST_KEY in the transforms.conf file. Looking at the documentation of "How Splunk licensing works", it says: "When ingesting event data, the measured data volume is based on the raw data that is placed into the indexing pipeline." Looking at the monitoring console, I realized that the indexing pipeline is made up of the syslog out, tcp out, and indexer lines, so it seems that by using the SYSLOG_ROUTING DEST_KEY I could also consume Splunk license. Can you confirm this? Kind regards, Angelo
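For reference, a routing setup along the lines described might look like the following sketch. The stanza names, sourcetype, and destination host are hypothetical; note that in transforms.conf the DEST_KEY value is written with a leading underscore, `_SYSLOG_ROUTING`:

```ini
# props.conf - apply the routing transform to a sourcetype (hypothetical name)
[my_sourcetype]
TRANSFORMS-route_syslog = send_to_syslog

# transforms.conf - route matching events to a syslog output group
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

# outputs.conf - define the third-party syslog destination (hypothetical host/port)
[syslog:my_syslog_group]
server = 10.0.0.1:514
```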
Hey all, I've configured tcp-ssl on a HF, created certificates, and added the configuration below. The HF receives syslog from a third party; I'll send the third-party company the CA (combined certificate) I created based on these docs: 1. How to create and sign your own TLS certificates 2. Create a single combined certificate file

inputs.conf
[tcp-ssl://2222]
index = test
sourcetype = st_test

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\mycerts\myServerCertificate.pem
sslPassword = <Server.key password>
sslRootCAPath = C:\Program Files\Splunk\etc\auth\mycerts\myCertAuthCertificate.pem

server.conf
[sslConfig]
sslPassword = <encrypted password that I didn't configure>

And yet Splunk isn't listening on the requested port, for example 2222. What am I missing? The error I get in Splunk _internal is: SSL context not found. Will not open raw (SSL) IPv4 port 2222. Please assist, and thank you!!!
"Unwanted logs" is a very vague term, and the ES app definitely has nothing to monitor such vagueness. As the reply above said, you should fine-tune what to ingest and what not to ingest (and send the rest to the null queue).
OK, now I see and get the data. Thanks!
This should be tested; maybe I will give it a try this evening. Well, it looks like the developers made some basic errors. Even if we raise a support ticket for this, Splunk would consider it a low-priority ticket.
Now I see. First, I missed a double quote, so you are correct. My search is:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| timechart avg("value1") span=10s useother=false BY host WHERE max in top5

Now, if I want to search based on what you wrote:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| untable _time host average
| stats dc(host) as c_host

OR

| timechart avg("value1") span=10s useother=false BY host WHERE max in top5
| untable _time host average
| stats dc(host) as c_host

Anyway, I want to use the mstats function and get a count of the hosts.
Short answer - no, there isn't. Longer answer - but you could write an app to do it.

Assuming that you don't care for events of the sourcetype foo, you could set up an app on your indexers that looks like:

props.conf
[foo]
TRANSFORMS-route_to_nullQueue = route_to_nullQueue

transforms.conf
[route_to_nullQueue]
INGEST_EVAL = queue := "nullQueue"

Once active, events of sourcetype foo won't be ingested and won't count towards your daily ingest license.
Three ideas...
1) As I heard, the Splunk Essentials app has some sample data: https://splunkbase.splunk.com/app/3435
2) You can also find some sample data in this repo: https://github.com/splunk/botsv3
3) And then there is an app, Eventgen. It is very difficult to configure and very poorly documented; I would suggest it only as a last resort.
Thanks.
Hi, I have the same issue. We have SHs and an IDX cluster. Only the overview dashboard is empty.
The app "Splunk App for Fraud Analytics" says that you "can download and install test data from here. Please consider that using test data can use up to 7 GB and will take 10-30 minutes for the test data to initialize correctly". But I did not find any test data attached.
You can use conditions in drilldowns to determine which field/column has been clicked:

<drilldown>
  <condition field="Name">
    <link target="_blank">your search link</link>
  </condition>
  <condition field="Organization">
    <link target="_blank">your organisation link</link>
  </condition>
</drilldown>
| stats latest(value_sum), latest(value_cnt) by key
| mstats avg("value1) prestats=true WHERE "index"="my_index" span=10s BY host

has a missing double quote, so it will give you an error.

Also, assuming that this is corrected, you will get a field called something like "avg(value1)". This means that you no longer have a field called "value1", so the timechart command has no field to do an average on. This is why the search you provided does not work.

Assuming it is the timechart table that you want to count hosts for, you could untable the chart table:

| untable _time host average
| stats dc(host) as c_host
| where c_host < 3
Sorry, I forgot to write something very important. I have many events with the same key, for example:

10/4/23 1:23:03.000 PM  {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz  source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh  sourcetype = damu_pays_7d

10/4/23 1:24:03.000 PM  {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz  source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh  sourcetype = damu_pays_7d

10/4/23 1:25:03.000 PM  {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz  source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh  sourcetype = damu_pays_7d

... and many events for other keys, for example "key":"29.09.2023".

In the result I want to see only one unique row per key. I tried this search:

index=hcg_app_damu_prod sourcetype="damu_pays_7d"
| spath input=json
| table _time, key, value_sum, value_cnt
| stats latest(key), latest(value_sum), latest(value_cnt)

but it gives me only one row.
Hi all, I successfully forward data from Windows using the command

msiexec.exe /i splunkuniversalforwarder_x86.msi RECEIVING_INDEXER="indexer1:9997" WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 AGREETOLICENSE=Yes /quiet

from "Install a Windows universal forwarder". The same for Linux with the command

./splunk add monitor /var/log

from "Configure the universal forwarder using configuration files". Both work fine and I can see the hosts in the Data Summary, as visible in the following figure.

[Figure: Data Summary]

If I instead set up the input in the local inputs.conf file after a basic installation and assign a specific index, for example

[perfmon://LocalPhysicalDisk]
interval = 10
object = PhysicalDisk
counters = Disk Bytes/sec; % Disk Read Time; % Disk Write Time; % Disk Time
instances = *
disabled = 0
index = winfwtestinger

I can see that data is ingested if I search for the specific index, but it does not appear in the Data Summary. I would be very happy about any suggestion as to what I am doing wrong here.

Best regards
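As a quick sanity check that the perfmon events are actually arriving per host, a search along these lines could be run (a sketch, using the example index name from the post above):

```spl
index=winfwtestinger earliest=-15m
| stats count by host, sourcetype
```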
Not sure why you say that, but it's working. Just to be clear: value1 is some internal parameter, and index is my index. Based on that, I'm getting information about the hosts. Now I just want to count how many hosts are reporting; when it's less than 3, I want to trigger an alert on it. Hope it's clear now.
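A sketch of what such an alert search could look like, assuming the metric name value1 and index my_index used elsewhere in the thread (note prestats=true is dropped so that the result is a plain table the stats command can count over):

```spl
| mstats avg("value1") WHERE "index"="my_index" span=10s BY host
| stats dc(host) AS c_host
| where c_host < 3
```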
I have checked internal events; only 5 of them are logging to _internal. I have Windows event logs for DC2 and 5 in the WinEventLog. I will check the UF again to see where the problem is. Much appreciate your help @PickleRick
Hello,
When I run a search, I get the message "could not load lookup" with different lookup names. For example:
Could not load lookup=LOOKUP-Kerberosfailurecode
Could not load lookup=LOOKUP-Kerberosresultcode
Could not load lookup=LOOKUP-syscall
I had a look in the lookup definitions menu and I can see that some lookups are referenced in my Splunk apps even though I don't use these lookups in my apps! Can I change the name of the app they belong to? Is it possible to change it? Moreover, some lookups like "syscall" don't exist in my lookup definitions menu at all, so how do I solve this issue, please?
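One possible way to see which app each lookup definition belongs to is the rest command (a sketch; it assumes your role is allowed to call the REST endpoint):

```spl
| rest /services/data/transforms/lookups
| table title, eai:acl.app, eai:acl.sharing
```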
Do you already have the fields extracted, or are you asking how to extract the fields so you can use them in a table?
This is rather confusing.  It seems that Splunk already gives you fields "key", "value_sum", and "value_cnt".  You want to rename "key" as "day", "value_sum" as "sum", "value_cnt" as "cnt".  Is that all?  Are you just looking for rename? | rename "key" as "day", "value_sum" as "sum", "value_cnt" as "cnt" | table day sum cnt Something like that.