All Topics

Hi Team,

Current table:

Application  Failure  Success
A            2        6
B            4        7
C            5        8

Expected:

Application  Failure  Success
D            11       21

How can I add up the Application values and present the totals as a new application? I also need to sum all the Failure and Success values. Can anyone help with this?

Regards, Madhu R
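A hedged sketch of the sum in SPL, assuming the current table comes from a search that yields the fields Application, Failure, and Success, and that "D" is the desired name for the combined row:

```
... | stats sum(Failure) as Failure, sum(Success) as Success
    | eval Application="D"
    | table Application Failure Success
```

`stats sum` collapses all rows into one, and `eval` labels the result as the new application.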
Is there a way to create a Submit button with different functionality per panel? Let's say one panel is to modify and one panel is to search. I followed the guide at https://blog.avotrix.com/add-submit-button-in-splunk-dashboard-panel/, and while I can create a Submit button per panel, they behave like the global Submit button, just located at each panel visually instead of at the top of the dashboard. Pressing the Submit button at one panel will also submit the inputs at the other panel.
I have two times, A and B, in HH:MM:SS format. How do I get the difference between A and B in the same format?
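One hedged way to do this in SPL, assuming A and B are string fields in HH:MM:SS format and B is the later time on the same day:

```
... | eval diff_sec = strptime(B, "%H:%M:%S") - strptime(A, "%H:%M:%S")
    | eval diff = tostring(diff_sec, "duration")
```

`tostring(<seconds>, "duration")` renders the difference back in HH:MM:SS form.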
Hi all, I was previously tracking a new add-on that Splunk was developing for ingesting Google Workspace "audit" data into Splunk. We're already using the excellent existing add-on created by Kyle Smith, but we were interested to see what the Splunk version would be like too. It looks like it was in beta; I could previously access it here: https://splunkbase.splunk.com/app/5556/#/details But all trace of it seems to have disappeared now. Any ideas what happened to it? Thanks in advance, Stu
Hi All, I am having some trouble extracting the following details:
1. username
2. DefaultMsg
3. Date
4. Time
This is what I have tried. It gives me the username, but I am stuck on how to extract the date, time, and DefaultMsg. Can someone please help me? Thank you so much.

Index=xxx-xxx  | rex (?<username>\w+@\w+.\w+)  |table username DefaultMsg Date TIme

Thank you, regards, Alex
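Without a sample event it is hard to be exact, but here is a hedged sketch. Note that in SPL the `rex` pattern must be a quoted string, and the dot before the domain suffix should be escaped. The date/time patterns and the `DefaultMsg=` key below are assumptions about the event format, not taken from the question:

```
index=xxx-xxx
| rex "(?<username>\w+@\w+\.\w+)"
| rex "(?<Date>\d{4}-\d{2}-\d{2})"
| rex "(?<Time>\d{2}:\d{2}:\d{2})"
| rex "DefaultMsg[=:]\s*(?<DefaultMsg>[^,;]+)"
| table username DefaultMsg Date Time
```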
Hi All, I have just copied working props and transforms stanzas from one HF to another for SQS logs. However, it's having issues using these props and transforms: logs have stopped and I am getting a message "start writing events to STDOUT" host="" index="<index>main</index>" stanza="". I am using the transforms to extract the hostname, index name, source, and sourcetype. Any help appreciated! Thanks
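This is hard to diagnose without the actual stanzas, but for comparison, here is a minimal sketch of an index-routing transform of the kind described (the regex and stanza names are illustrative, not the real config):

```
# transforms.conf (sketch; assumes the index name appears as <index>...</index> in the raw event)
[route_index_from_event]
REGEX = <index>([^<]+)</index>
FORMAT = $1
DEST_KEY = _MetaData:Index

# props.conf
[your_sqs_sourcetype]
TRANSFORMS-route = route_index_from_event
```

For `_MetaData:Index` the FORMAT must resolve to the bare index name; leaving literal `<index>...</index>` tags in the captured value would likely produce exactly the malformed index shown in the error message.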
Hi, I have an issue where DB Connect, installed on a heavy forwarder, is not able to forward logs to the indexers for a particular sourcetype configured on the indexer. The heavy forwarder obtains the logs from the MSSQL database using the HTTP Event Collector. From splunkd.log and the metrics logs I can see the non-DB Connect logs being forwarded on the standard 9997 port. We are able to run the SQL query in DB Connect and see results. I'm not quite sure where I should troubleshoot and am hoping for some leads. The DB Connect version is 3.1.4 and the Splunk Enterprise version is 7.2.6. I do not see error 400 on the DB Connect server, or in the command and audit logs, though.
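One hedged starting point is to check DB Connect's own error logging in _internal; the source path below is an assumption and may vary by DB Connect version:

```
index=_internal source=*splunk_app_db_connect* log_level=ERROR
```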
Hi, A lot of Splunkers know how to measure common latency/time skew in Splunk using _time and _indextime, but who knows how to measure the latency at every step from a UF on its way to the indexer, where there could be more forwarders along the way (heavy, intermediate, etc.) at which latency could rise? The question was really asked here: Indexing latency arising at forwarders?, but never answered. Does anyone know how to nail down this information? My idea was to somehow enrich the data at every level, by stamping each event at every forwarder tier with that tier's hostname and timestamp, so that you would always be in control and know the exact source of any latency, if you can follow my approach. I.e., would it be possible to use INGEST_EVAL to add new fields at every new tier the event passes, like: t<no>_host=<host> t<no>_time=<timestamp>? This approach will likely also touch on cooked data, and to what extent it's possible to enrich it along the way. Let me hear your thoughts and ideas.
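A sketch of the INGEST_EVAL idea, with illustrative stanza and field names. One caveat: INGEST_EVAL only runs where parsing happens (heavy forwarders and indexers), not on universal forwarders, so the first stamp would come from the first parsing tier:

```
# transforms.conf on the tier-1 heavy forwarder (sketch; names are illustrative)
[add_tier1_stamp]
INGEST_EVAL = t1_host=host, t1_time=time()

# props.conf
[default]
TRANSFORMS-tier1stamp = add_tier1_stamp
```

Each subsequent tier would use its own stanza (t2_, t3_, ...), so the deltas between the tN_time fields would locate where latency arises. Whether each tier re-processes already-cooked data this way is exactly the open question raised above.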
I want a report for when the total number of events from a sourcetype is less than 9,500,000 in a day. I tried the query below, but it gives me a count of 0.

| tstats count where index="cb_protect" sourcetype = "carbonblack:protect" subtype=* | search count<9500000

I need help with this scenario.
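A hedged sketch of a per-day version. One likely reason for the 0 count: `tstats` can only filter on indexed fields, so if `subtype` is a search-time field the `subtype=*` filter matches nothing:

```
| tstats count where index="cb_protect" sourcetype="carbonblack:protect" by _time span=1d
| where count < 9500000
```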
Hello all, when I specify multiple sourcetypes explicitly, I get some extra data in my CIM mapping charts, even though the corresponding query returns only the expected data. With the default single sourcetype the data was coming through fine. Any suggestions would be helpful.
Hi all, We use Splunk and the Splunk Forwarder for our project. Splunk is installed on EC2 and the Forwarder is part of our installation package, so when clients install our app, it's installed with the Splunk Forwarder. Our question: how can we protect the Splunk Forwarder from being uninstalled by a user in this case? For our app we use an uninstall password; a user needs to enter the password to remove it. Or maybe there is some way to tell a user that this Splunk Forwarder is part of our app when they try to remove it? Or maybe in our situation we need to use another way of forwarding logs to Splunk (without the Splunk Forwarder)?
I've been experimenting with the ML Toolkit and having some weird issues. I can get nice predictions by training on the data, but when trying to visualize and show the data in a table I get some issues. The data and the prediction don't seem to align by time, even though the time field is the same.
We are using Rapid7 as a vulnerability scanner and it is detecting a vulnerability in cipher negotiation. It says Splunk is negotiating the ciphers below:

PORT     STATE SERVICE  VERSION
8443/tcp open  ssl/http Splunkd httpd
|_http-server-header: Splunkd
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|_  least strength: A

As per the Rapid7 solution, the following ciphers should not be used:

TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A

How can we resolve this issue?
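Cipher selection in Splunk is controlled by `cipherSuite` settings; which .conf owns port 8443 depends on the deployment. A hedged sketch using OpenSSL-style names that keeps only the ECDHE suites from the scan (i.e., drops the flagged TLS_RSA_* suites):

```
# server.conf (splunkd SSL; web.conf [settings] has an equivalent cipherSuite setting)
[sslConfig]
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256
```

A restart is required, and the result should be re-verified with the same scan.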
Sorry, I have a newbie question. I want to add a deployment client to an existing server class that has apps which are triggered to restart Splunk after install. I am OK with restarting the newly added client, but was wondering if that will also restart all of the other existing clients? There are no changes to the apps for the existing clients.
I am using the Splunk Add-on for Microsoft Cloud Services to pull data from an Azure Event Hub. What I would like to know is whether there are any known limitations that anyone has run into:

* How many events can be pulled when the API call is made? The default is 300 per API connection.
* Are there any limits on how often it can be pulled? The default is 300 seconds, but can you pull, say, every 15 or 30 seconds?
* Is there a limit on how many events can be pulled relative to how often it is pulled? Can I pull 3000 events every 10 seconds, or is there a hard limit?
Hello Splunkers, What is the recommended Splunk version to upgrade to for the enterprise versions below?

7.0.0
7.3.0
7.1.2

Thanks, Suresh
I am trying to implement a simple Splunk system on my local computer to learn a bit about how you set up forwarders and get data into Splunk. I am running Splunk Enterprise on a CentOS 8 virtual machine, and I've installed a Universal Forwarder on the system that is running the virtual machine. I've set up Splunk to receive data over port 9997, and have ensured that port 9997 is open and listening in CentOS. On my main system I installed the Universal Forwarder and directed it to 192.168.0.21:9997 (my client is accessed at 192.168.0.21:8000).

Outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.0.21:9997

[tcpout-server://192.168.0.21:9997]

I am not using a deployment server. I'm using Bitdefender on my laptop and have made sure there's a rule in the firewall to allow traffic to 192.168.0.21:9997. I've also restarted the UF, Splunk Enterprise, and the VM running Splunk Enterprise. When I go to Add Data > Forward, it still says "There are currently no forwarders configured as deployment clients to this instance." I'm sure I'm just missing something in the setup steps, but I cannot figure out what it is.

----------------

Here are the main repeating messages from splunkd.log:

08-26-2021 16:21:40.575 -0800 INFO AutoLoadBalancedConnectionStrategy [12416 TcpOutEloop] - Found currently active indexer. Connected to idx=192.168.0.21:9997, reuse=1.
08-26-2021 16:21:40.991 -0800 ERROR ExecProcessor [5456 ExecProcessor] - message from "D:\Cybersecurity\SplunkUniversalForwarder\bin\splunk-admon.exe" splunk-admon - GetLocalDN: Failed to get object 'LDAP://rootDSE': err='0x8007054b' - 'The specified domain either does not exist or could not be contacted.'
08-26-2021 16:21:40.991 -0800 ERROR ExecProcessor [5456 ExecProcessor] - message from "D:\Cybersecurity\SplunkUniversalForwarder\bin\splunk-admon.exe" splunk-admon - getBasePath: Unable to query local DN, restart and specify base path to monitor 08-26-2021 16:21:40.991 -0800 ERROR ExecProcessor [5456 ExecProcessor] - message from "D:\Cybersecurity\SplunkUniversalForwarder\bin\splunk-admon.exe" splunk-admon - SplunkADMon::configure: Failed to configure AD Monitor
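The admon errors aside, the first log line suggests the TCP connection to 192.168.0.21:9997 succeeds. A hedged way to confirm from the Splunk Enterprise side that the forwarder connection is arriving:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname sourceIp
```

Note also that Add Data > Forward lists deployment clients only, so with no deployment server it can show none even while data is flowing.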
Hello,

I have a requirement where I need to extract part of the JSON from a Splunk log and pass that field to spath for further results. My regex works in regex101 but not in Splunk. Below is a log snippet; I'm looking to grab the JSON starting from {"unique_appcodes" to the end of the line.

cwmessage: 2021-08-26 17:14:10 araeapp INFO MRC: Unique AppCodes Report requested. 2021-08-26 17:14:10 araeapp INFO MRC_ARAE_I_042: (local) requesting uniq_appcodes report for KKA 2021-08-26 17:14:10 araeapp INFO {"unique_appcodes": [{"count": 2, "app_code": "XYZ", "group": "", "instance": "KKA"}, {"count": 2, "app_code": "QQQ", "group": "TSR05441", "instance": "KKA"}, {"count": 1, "app_code": "QQQ", "group": "", "instance": "KKA"}, {"count": 192, "app_code": "PPP", "group": "TSR05560", "instance": "KKA"}, {"count": 12, "app_code": "PPP", "group": "", "instance": "KKA"}, {"count": 12, "app_code": "GM9", "group": "TSR06083", "instance": "KKA"}, {"count": 139, "app_code": "ZZZ", "group": "TSR06103", "instance": "KKA"}, {"count": 6, "app_code": "GNA", "group": "TSR06085", "instance": "KKA"}, {"count": 803, "app_code": "SSS", "group": "MXXX0718", "instance": "KKA"}, {"count": 3, "app_code": "SSS", "group": "", "instance": "KKA"}]}

The rex I am using:

| rex field=_raw (?msi)(?<json_field>\{\"unique_appcodes\".+\}$)

This works perfectly in regex101.com, extracting the required part (the JSON object shown above, from {"unique_appcodes" to the end of the line), but when I use it in Splunk it gives no results. I'm thinking it's the spaces between the JSON attributes. Please let me know your thoughts.
Hi, I've previously used IMDSv1 on my EC2 instances to provide role credentials to allow my EC2 Splunk instance to reach across accounts to grab files. I'm interested to find out whether Splunk supports IMDSv2 for credentials. I haven't been able to find anything (nor gotten this to work). Thanks!
How do I look for a report by name in Splunk Enterprise / ES, please? I've run out of tricks I know. Please advise.
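One hedged way, assuming permission to use the `rest` command (the "*myreport*" fragment below is an illustrative placeholder):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="*myreport*"
| table title eai:acl.app eai:acl.owner
```

Reports and alerts are both saved searches, so this lists matching reports across apps with their owning app and owner.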