Hello, I have this query that I want to improve:

| loadjob savedsearch="myquery"
| where (strftime(_time, "%Y-%m-%d") = "2020-02-27")
| stats dc(eval(if(STEP="Sent",ID_MESSAGE,NULL))) AS sent, dc(eval(if(STEP="RME",ID_MESSAGE,NULL))) AS RME by ID_MESSAGE
| join type=left ID_MESSAGE
    [| loadjob savedsearch="myquery"
     | where (strftime(_time, "%Y-%m-%d") >= "2020-02-27") AND (strftime(_time, "%Y-%m-%d") <= "2020-03-03")
     | stats dc(eval(if(STEP="Click",ID_MESSAGE,NULL))) AS click by ID_MESSAGE]
| where sent>0
| stats sum(sent) AS sent, sum(RME) AS RME, sum(click) AS click

This is the result I get today, and this is the result I want, knowing that RME and click are calculated on the ID_MESSAGE values that were sent.
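Since both legs of the join read the same saved search, one hedged way to avoid the join entirely is a single pass with time-conditioned evals — a sketch only, assuming the saved search already covers the whole 2020-02-27 through 2020-03-03 window:

```spl
| loadjob savedsearch="myquery"
| eval day = strftime(_time, "%Y-%m-%d")
| stats dc(eval(if(STEP="Sent" AND day="2020-02-27", ID_MESSAGE, null()))) AS sent,
        dc(eval(if(STEP="RME" AND day="2020-02-27", ID_MESSAGE, null()))) AS RME,
        dc(eval(if(STEP="Click" AND day>="2020-02-27" AND day<="2020-03-03", ID_MESSAGE, null()))) AS click
        by ID_MESSAGE
| where sent > 0
| stats sum(sent) AS sent, sum(RME) AS RME, sum(click) AS click
```

Because click is still computed per ID_MESSAGE and the `where sent > 0` filter runs before the final sum, clicks are only counted for messages that were actually sent, which matches the intent described above.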
Is it possible to import a large number of IP addresses into a lookup list and search against it without assigning the addresses to a specific field? Basically I want the lookup search to act as if I had just typed 10.X.X.X into the search bar. I know this isn't the best way to do it, but right now I want to be able to search for that IP across all indexes, and each index has a different field name for it.
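One commonly used approach — a sketch only, assuming a lookup file named ip_list.csv with a column named ip (both names are placeholders) — is to let a subsearch emit the bare values so they behave like terms typed directly into the search bar:

```spl
index=* [| inputlookup ip_list.csv | return 10000 $ip]
```

The `$ip` form of `return` drops the field name, so each address is ORed into the outer search as a raw term rather than a field=value pair, matching it in any index regardless of the field it was extracted into.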
I want to create a Splunk dashboard for 100 subnets listed in a CSV file, but I'm not able to use this CSV file in my Splunk query. Example CSV file: Subnets.csv. A simple Splunk query that generates results without the CSV file, for only 3 subnets:

index=firewall (IP="10.10.10.*" OR IP="10.10.20.*" OR IP="100.100.20.*") (Status=allow) | stats count(IP) by Status

How can I use the Subnets.csv file of 100 subnets instead of adding each one to the search by hand?
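A hedged sketch of one way to do it, assuming Subnets.csv has a column named subnet containing prefixes like 10.10.10. (the column name is an assumption): a subsearch can expand the lookup into the OR list for you.

```spl
index=firewall Status=allow
    [| inputlookup Subnets.csv
     | eval IP = subnet . "*"
     | fields IP
     | format]
| stats count(IP) by Status
```

`format` turns the subsearch rows into ( ( IP="10.10.10.*" ) OR ( IP="10.10.20.*" ) OR ... ), which is exactly the hand-written clause from the 3-subnet example, so the dashboard query no longer has to be edited when subnets change.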
We need to find which of 500+ client servers (but fewer than 1000) are missing DNS entries. Luckily the servers follow a pattern; let's say the server names are myhost001 to myhost999. We want to loop through all those servers and do a dnslookup:

| makeresults | eval src_host_001="myhost001" ... | eval src_host_999="myhost999" | foreach src_host_* [eval src_host=<<FIELD>>] | lookup dnslookup clienthost as src_host OUTPUTNEW clientip

Is there a way to loop using a pattern? I don't want to use a lookup file or external CSV. Ideally the logic I'm trying to replicate is: for the range myhost[0-9][0-9][0-9], loop through each host and do a dnslookup. Something like:

for id in $( seq 1 999 ); do { src_host=myhost${id} ## |lookup dnslookup clienthost as ${src_host} OUTPUTNEW clientip }

Thanks in advance.
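One lookup-file-free sketch: generate the numeric range with mvrange, expand it to one row per host, and zero-pad the id to rebuild each hostname (the final filter name is an assumption about what "missing DNS entry" should mean here):

```spl
| makeresults
| eval id = mvrange(1, 1000)
| mvexpand id
| eval src_host = "myhost" . substr("00" . id, -3)
| lookup dnslookup clienthost AS src_host OUTPUTNEW clientip
| where isnull(clientip)
```

`mvrange(1, 1000)` yields 1 through 999, `substr("00" . id, -3)` keeps the last three characters so 1 becomes 001, and the closing `where isnull(clientip)` keeps only the hosts dnslookup could not resolve — the candidates for missing DNS entries.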
Hi. I have been struggling with this for a few days; I'm sure I misunderstand some step, and that's the reason. I'm trying to configure my NAS servers (Synology) to send logs to my Splunk instance. Logs are received correctly when I don't use SSL in the Synology log-forwarding configuration, but when I enable SSL and import the certificate in Synology, the logs are received garbled. I'm looking for simple instructions on how to set up Splunk to receive input data via TCP with a self-signed certificate. I generated certificates following https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Howtoself-signcertificates, which produced these files in /opt/splunk/etc/auth/mycerts: CACertificate.csr, CACertificate.pem, CAPrivate.key, ServerCertificate.csr, ServerCertificate.pem, ServerPrivate.key. After that I configured Synology to send logs via TCP port 514 with SSL enabled and CACertificate.pem imported. I still don't understand how to configure inputs.conf and server.conf on my Splunk server to receive SSL syslog over TCP. I've tried configuring it like this:

inputs.conf
[tcp-ssl:514]
sourcetype = syslog
[SSL]
rootCA = /opt/splunk/etc/auth/mycerts/CACertificate.pem
serverCert = /opt/splunk/etc/auth/mycerts/ServerCertificate.pem

What am I doing wrong?
Hi there! I'm using this query:

index="dev" | eval raw_len=len(_raw) | eval raw_len_gb = raw_len/1024/1024/1024 | stats sum(raw_len_gb) as GB by kubernetes_namespace | bin _time span=1d

to measure the amount of data received from K8s logs, and I want to trigger an alert from this query if the amount exceeds 0.5 GB per day. How do I define that in the alert? Right now, when I define an alert that triggers on "eval raw_len_gb > 0.5", it never fires. Thanks, Aleksei
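One hedged way to make this alertable is to move `bin` before `stats` (so the daily grouping actually applies) and put the threshold into the search itself, then set the alert to trigger when the number of results is greater than 0:

```spl
index="dev"
| bin _time span=1d
| eval raw_len_gb = len(_raw)/1024/1024/1024
| stats sum(raw_len_gb) AS GB by _time, kubernetes_namespace
| where GB > 0.5
```

In the original query, `bin` comes after `stats`, where `_time` is no longer in the results, and the alert condition references raw_len_gb, a per-event field that no longer exists after `stats` — which would explain why the alert never fires.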
Hello, I want to write a Splunk alert for unused volumes in AWS and send a Slack notification. Any suggestions on this? Regards, Dennis
I am upgrading Splunk from 6.5 to 7.3. Do I need to upgrade the event collector agents as well, or will the existing agents be compatible with the latest version?
Hi All, We have one database with 100+ tables that need to be onboarded, and every table has a different rising column. Could you please suggest how to onboard the 100+ tables rather than onboarding them manually one by one?
I am having trouble using the BMC Remedy Add-on in a search head cluster environment. The first issue is that the web UI does not get beyond "Loading" on any of the members; I worked around this by creating a "splunk_ta_remedy_settings.conf" from the CLI with the settings in it. The second issue is that I receive command="remedyincidentcreate", cannot concatenate 'str' and 'NoneType' objects when running | remedyincidentcreate --help in the search bar. Any thoughts on how to solve this?
Why in the world is this not the default? How can I force it to be the default?
I have the Splunk App for Infrastructure installed and configured. It works for the Windows agent, but I cannot make it work for a Linux server. Collectd seems to run well with the write_splunk plugin. When I run the search index="_introspection" token | spath "data.token_name" | search "data.token_name"="collectd token", it looks like the HEC is receiving data, as the screenshot shows. But there is no data in the metrics index assigned to the HEC token, and in a search for | mstats count WHERE index=* AND metric_name=* by host, metric_name only the Windows host shows up.
My Splunk architecture is as below: UF -> HF -> IX -> SH. Here, UF and HF are on the same network, whereas IX and SH are in Splunk Cloud. I need to run a certain script on the UF only on user request; I cannot schedule it on a time basis. However, considering my SH is in the AWS cloud and outside the network boundary, how can I run the script on user demand? Is there any solution or workaround for this?
Hi Team, I am currently working on a UF auto-installation script that has to automatically upgrade the UF package from v6.5.3 to v7.3.4 on all Windows boxes. The script should work as follows: check for any existing Splunk UF version on the Windows box; if UF v6.5.3 is already running, stop the UF agent, upgrade the Splunk UF package to v7.3.4 (untar the splunkforwarder.tgz package), and then start the Splunk services. After that it should connect to a deployment server (updating deploymentclient.conf with the DS host and port 8090). If the Windows box doesn't have any Splunk UF package installed, the script should freshly install the UF v7.3.4 package on that Windows server and then connect to the DS. Do you have any reference shell script for the above upgrade/installation? Please note I will use the script for reference purposes only; I won't use it directly. I'd appreciate your help on this. Regards, Santosh
Hi all, I have a JSON file for each Jenkins build. I want to calculate the mean time to recovery, that is, the time taken from a failed build to the next immediately passing build. I listed all the failed builds, but I am not able to get the value of the next passing build. Can anyone please help me with this?
I have a single Splunk forwarder forwarding to 3 indexers in a cluster. After changing the file permissions from rwx to rw, the forwarder stopped indexing files from the input directories. I have looked at the logs and found no clues. Any suggestions on where to look for errors/exceptions? TIA.
I need to pull some logs that happen in the hour after 2020-02-29 16:12:26:000. What would be the best time setting in Splunk to do this? It sounds super simple, but I've gone through the time settings and can't find the best one to use.
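A minimal sketch using Splunk's earliest and latest time modifiers in the default %m/%d/%Y:%H:%M:%S timestamp format (the index name is a placeholder):

```spl
index=your_index earliest="02/29/2020:16:12:26" latest="02/29/2020:17:12:26"
```

Alternatively, the same window can be set interactively in the time-range picker under Advanced, by entering those two timestamps as the earliest and latest bounds.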
Hello All, I am using the below conditions (along with the required conditions) to configure an alert: earliest=-5h | head 100. The challenge is that the alert sometimes considers transactions that have already generated an alert. For example, an alert fired at 12:00 PM and then again at 1:30 PM. This happened because there were some failures between 11:00 AM and 11:30 AM, and even if there are only 1 or 2 failures around 1:25 PM, the search still picks up the failures from 11:00 AM to 11:30 AM. The reason for using earliest=-5h and head 100 is that this application has very few transactions. I thought of using suppression for a longer period (currently it is set to 1 hour), but that might cause valid alerts to be missed. Is there any other way (other than reducing the earliest time or the head count, or increasing the suppression) to mitigate this?
My data looks like:

{
  "parent_id": 1,
  "child_info": [
    { "id": 123, "status": "PASS" },
    { "id": 456, "status": "FAIL" }
  ]
}

I am trying to print the result in this format:

parent_id | Child_Passed_count | Child_Failed_count
1         | 1                  | 1
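A hedged sketch of one way to get that table, assuming each raw event is a JSON document shaped like the one above: extract and expand the child_info array, then count pass and fail per parent.

```spl
| spath path=parent_id
| spath path=child_info{} output=child
| mvexpand child
| spath input=child
| stats count(eval(status="PASS")) AS Child_Passed_count,
        count(eval(status="FAIL")) AS Child_Failed_count
        by parent_id
```

`spath path=child_info{}` pulls each array element into a multivalue field, mvexpand gives one row per child, and the second spath extracts id and status from each element so stats can count them per parent_id.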
There are plenty of answers to the question of how to convert hex into ASCII using a combination of rex/replace and urldecode. However, I am looking to do the opposite: I have binary data in a field and I would like to convert it to hex values. Sample data: ����$P/�Dl ������E��4K@��v���3>n� �. I do not simply want to strip this data, because I actually want to do something with it once it has been converted. I would also prefer to do it with an eval statement so I can create a calculated field based on it. Any thoughts or ideas?