All Topics



Hi Team,

I want to calculate a percentage based on two different tables, where I am using addcoltotals to compute a grand total.

Table 1:
A  B  C
v1 v2 v3
v4 v5 v6
T1 T2 T3  <- grand total

Table 2:
A  B  C
x1 x2 x3
x4 x5 x6
T4 T5 T6  <- grand total

I want to calculate: F1 = T1/T4, F2 = T2/T5, F3 = T3/T6.

Could you please help me find a solution? Can I store the T values in tokens and use the eval command to calculate the results? Looking forward to your help.
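One possible approach, without tokens, is to combine the two totals rows in a single search. This is a sketch: the two base searches and the column names A/B/C are placeholders, and it assumes addcoltotals appends the totals as the last row of each result set:

```spl
<search for table 1>
| addcoltotals
| tail 1
| appendcols
    [ search <search for table 2>
      | addcoltotals
      | tail 1
      | rename A as A2, B as B2, C as C2 ]
| eval F1 = A / A2, F2 = B / B2, F3 = C / C2
| table F1 F2 F3
```

appendcols merges the two single-row results side by side, so the final eval can divide the totals directly.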
https://apps.splunk.com/app/4770 – ServiceNow Security Operations Event Ingestion Add-on for Splunk ES
https://apps.splunk.com/app/3921 – ServiceNow Security Operations Add-on

Both support on-demand incident creation in ServiceNow, so what is the actual difference here? Does anyone have any idea? I know the Event Ingestion add-on requires a license from ServiceNow. Is that the only difference, or is there something else as well?
Hello, is it possible to put a real-time panel and a dynamic panel in one dashboard? If so, please give me some guidance on how to implement this.
Hi Team, has anyone integrated the applications below with Splunk? If yes, please suggest how.

Chromeleon – Chromatography Data System (CDS) software. Built with both the lab and IT in mind, this software delivers superior compliance tools, networking capabilities, instrument control, automation, and much more.
Robotics DBC
Hi team, I am new to Splunk, so please help me here. We have integrated an AlgoSec application with Splunk via syslog and are collecting audit logs, i.e. successful/unsuccessful logins to the AlgoSec application. In the logs we only get the AlgoSec application's IP, not the source IP (the host that is actually trying to log in). We have also checked the AD logs based on the username and the target IP (the AlgoSec IP), but could not find any information about the source IP. LDAP is configured on the AlgoSec application. So my question is: which method will get me the actual source IP?

1. Could I get the source IP from the LDAP audit logs via Event Viewer? If yes, how can I forward those logs to Splunk from Event Viewer? Windows Event Viewer > Applications and Services Logs > Directory Service https://www.manageengine.com/products/active-directory-audit/how-to/images/how-to-audit-ldap-queries-active-directory-2.png
2. Splunk Supporting Add-on for Active Directory (SA-LDAPSearch) – is this add-on useful for getting LDAP audit logs to find the source IP? https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.3/User/AbouttheSplunkSupportingAdd-onforActiveDirectory
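For option 1, forwarding the Directory Service event log from a domain controller is usually done with a Windows event log input on the Universal Forwarder. A minimal sketch (the index name is a placeholder; the channel name should match what Event Viewer shows):

```conf
# inputs.conf on the Windows Universal Forwarder (domain controller)
[WinEventLog://Directory Service]
disabled = 0
index = wineventlog
```

Note that LDAP interface auditing (which records client IPs in events like 2889) may also need to be enabled on the DC via the diagnostics registry settings before these events appear at all.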
Hello all, I wonder if you could share whether there is a Splunk decoder for the "Splunk add-on for AWS" that works well with Control Tower. Currently Control Tower sends both CloudTrail and AWS Config to the same SNS topic, to which we've added a subscribed SQS queue. Because the queue now carries both CloudTrail and Config messages, choosing the "config" decoder generates errors such as the one below, since most of the messages are CloudTrail:

records = document['configurationItems']
KeyError: 'configurationItems'

Thank you very much, Marco
I need to stop ingesting from one of my four firewalls. The path of our architecture is: firewalls >> syslog >> deployment server >> indexer cluster >> search head. I have tried commenting them out under the deployment apps (inputs.conf) on the deployment server, but I am still seeing ingestion from that firewall. Any help is appreciated!
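If all four firewalls arrive through the same syslog input, disabling the input would stop all of them; filtering the one host at parse time is the usual alternative. A sketch using Splunk's nullQueue routing (the host value is a placeholder for how that firewall's host appears in your events; these files go on the indexers, or on the first full Splunk instance in the pipeline):

```conf
# props.conf
[host::10.0.0.1]
TRANSFORMS-drop_fw1 = drop_firewall1

# transforms.conf
[drop_firewall1]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Events from that host are then discarded before indexing, so they also stop counting against the license.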
stats count(eval(searchmatch("Bala"))) as A count(eval(searchmatch("kasa"))) as B count(eval(searchmatch("reddy"))) as C

A B C
1 2 3

Now I want the total of these row values as a single table:

Total
6
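A minimal sketch, continuing from the stats line above: add the three columns into one field and keep only that field:

```spl
... | stats count(eval(searchmatch("Bala"))) as A
          count(eval(searchmatch("kasa"))) as B
          count(eval(searchmatch("reddy"))) as C
| eval Total = A + B + C
| table Total
```

Alternatively, `| addtotals fieldname=Total | table Total` computes the same row total without naming each column.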
Trying to convince my boss to switch to Splunk, but the biggest issue is ESM's ease of use. Everything is pretty much plug-and-play in ESM, where Splunk takes a lot of work to get the same results. Here are a couple of examples:

1. In ESM he can easily create a watchlist with all the server IPs and attach it to an alert so it does not fire when those IPs are in the src or dest. In Splunk I have to create a lookup, then a lookup definition, then a macro, and if I want to add an IP to the file I have to go into the CLI and add the new IP.
2. ESM has an alert page where you can see all the alerts that have fired and check a box once you've decided you have investigated an alert enough, which lets other analysts know it has already been handled. I have not seen anything like this in Splunk without spending more money.
3. ESM really has no easy way to query on the fly, so Splunk does win this one.
4. He wants a solution that a new analyst can sit down with and use after minimal training. I don't like the idea of button pushers, but we don't have months or years (or the money) to train a new analyst.

If I could resolve issues 1 and 2 so anyone could do that without having to be a programmer, and do it all in the GUI, I think I could convince him. Licensing cost is not an issue; both are already licensed. Any ideas?
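On point 1, the lookup/macro chain can be shortened: a lookup referenced directly in the alert search via an inputlookup subsearch needs no macro at all. A sketch, where the lookup file name and its `ip` column are placeholders:

```spl
index=firewall
    NOT [ | inputlookup server_watchlist.csv | fields ip | rename ip as src ]
    NOT [ | inputlookup server_watchlist.csv | fields ip | rename ip as dest ]
```

For GUI editing of the CSV itself (no CLI), the free Lookup File Editor app on Splunkbase lets analysts add or remove rows from the lookup in the web UI.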
I am trying to make a few changes to this Splunkbase app (GitHub link) to make it Splunk Cloud compatible. The app seems to need only a few very small changes, given the recent requirement for jQuery 3.5. AppInspect gives the following report. The first error should be fixed by just adding version="1.1" to the XML dashboard. The second error also seems fairly simple, but I am not exactly sure where to find these dependencies and how to add them to the app. Any help, please?
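For the first error, the fix is typically just the version attribute on the dashboard's root element. A sketch (the root element may be form rather than dashboard, depending on the view):

```xml
<dashboard version="1.1">
  <label>My Dashboard</label>
  <!-- existing rows/panels unchanged -->
</dashboard>
```

Setting version="1.1" opts the view into the jQuery 3.5-compatible rendering path that AppInspect checks for.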
I am trying to create a setup page that lets the user add multiple accounts. (The number of accounts to be added is unknown, so I decided to use the Global Account option for this.) I want to get the value of the 'Account Name' column inside the Python script generated by Add-on Builder, called 'modalert_appname.py'. By default, the script comes with a helper function, helper.get_user_credential("<account_name>"). The problem is that this function takes the parameter 'username', and I want a function that takes the 'account_id' as its argument. Technically this should be possible with helper.get_user_credential_by_id(account_id), as mentioned in the Python helper function documentation. But when I try to use this function, it gives me an error saying 'object has no attribute' (meaning it is not defined on this helper). Any ideas on how to overcome this problem?
Hi Team, I am looking to get an incremental count of some data in a dashboard. For example: if the count for a certain table over the past 5 days is 500, then on day 6 I need the incremental figure, i.e. the 5 days' data plus the 6th day's data. Likewise, I need to capture this over a week. Kindly assist.
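A running (cumulative) daily count can be sketched with timechart plus accum; the index name and the 7-day window are placeholders:

```spl
index=your_index earliest=-7d@d
| timechart span=1d count as daily_count
| accum daily_count as running_total
```

Each row then shows that day's own count alongside the total of all days up to and including it, which is the "5 days + 6th day" behaviour described above.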
I have a dashboard. Is there a way to move that Dashboard Studio screen to another server? Please point me to some documentation.
Hello everyone, I have set up an Adaptive Response Action (a custom bash script) along with a notable event on a simple correlation search. The notable triggers, but the script does not. The script is used to initiate a tcpdump capture on an indexer. The script is placed under:

- /opt/splunk/etc/apps/SplunkEnterpriseSecurity/bin/tcpdump.sh
- /opt/splunk/bin/scripts/tcpdump.sh

Owner: splunk
Permissions: 755

tcpdump.sh:

#!/bin/bash
# Initiate tcpdump (3 dumps of 5 mins each)
tcpdump -i ens33 -G 300 -W 3 -w /mnt/nfs/pcaps/pcap-%Y-%m-%d_%H.%M.%S

I tried to create an app with an Adaptive Response Action using Add-on Builder, but my coding skills are not good. How can I troubleshoot why the script is not running at all?

Thanks, Chris
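A first troubleshooting step is to check Splunk's internal logs for the alert-action run and its exit status. A sketch (filter further by your action's name once you see it in the results):

```spl
index=_internal sourcetype=splunkd component=sendmodalert
| table _time action_name exit_status message
```

Also worth checking: tcpdump normally requires root privileges (or the CAP_NET_RAW capability), which the splunk user typically does not have, so the script may launch and then fail silently even when the action itself fires.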
Hi Team, I have the following result with a 30-minute bucket, using stats values() and then xyseries:

time   field1            field2           field3           field4
05:30  4,10,11,12,30     1,13,14,9,8,7    5,7,3,8,9,1,55   23,24,17,18,19
06:00  19,10,11,12,30    12,3,14,9,8,7    1,17,3,8,1,34    22,2,25,17,18,19
06:30  20,10,11,12,55    11,13,14,9,18,7  10,7,3,8,9,1,4   23,24,26,1,18,49
07:00  21,10,11,12,44    12,13,17,9,7     6,7,3,9,1,23     23,24,25,17,18,19
07:30  31,10,11,12,50    1,13,14,9,8,7    5,7,3,8,9,11     23,24,25,17,18,19
08:00  1,10,11,12,30,88  12,13,14,9,81    5,7,3,8,9,17     23,24,25,17,18,19
08:30  1,10,11,12,30,99  12,13,14,9,81    5,7,3,8,9,18     23,24,25,17,18,19
09:00  1,11,12,30,23     11,1,14,9,7      10,7,3,8,9,18    23,24,25,17,18,19
09:30  1,10,11,12,300    12,13,4,9,8,7    4,7,3,8,9,1      23,24,25,17,18,19

Currently the result shows all the values for each field. What I am looking for here is the top 3 values with the maximum count for each field; I am not sure how to pull that result. Could someone guide me?
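One way to sketch this is to rank values before the xyseries step instead of collecting them all with values(). This assumes the pre-xyseries data has one value per row with columns named `field` and `value` (both placeholders for your actual column names):

```spl
... | bin span=30m _time
| stats count by _time field value
| sort 0 _time field -count
| streamstats count as rank by _time field
| where rank <= 3
| stats list(value) as top3 by _time field
| eval top3 = mvjoin(top3, ",")
| xyseries _time field top3
```

The streamstats/where pair keeps only the three most frequent values per bucket and field; mvjoin flattens them back into one cell for xyseries.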
Hello everyone, I am trying to create a custom alert action where a tcpdump capture is triggered for the event's src and dest IPs. I created a simple bash script for that:

#!/bin/bash
# Initiate tcpdump (3 dumps of 5 mins each)
tcpdump -i ens33 -G 300 -W 3 -w /mnt/nfs/pcaps/pcap-%Y-%m-%d_%H.%M.%S

My problem is that this does not use the src and dest IPs of the triggered correlation event. How can I pass these variables into the script, so that I capture only the traffic between those two hosts rather than everything?

Thanks, Chris
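For legacy scripted alert actions, Splunk passes the path of the gzipped results CSV as the script's 8th argument. A sketch of pulling src/dest out of that file and handing them to tcpdump; the field names, interface, and CSV layout (simple unquoted fields) are assumptions:

```shell
#!/bin/bash
# first_field <csv> <column>: print the named column's value from the
# first result row of the alert's results CSV.
first_field() {
  awk -F',' -v col="$2" '
    NR == 1 { for (i = 1; i <= NF; i++) if ($i == col) c = i; next }
    NR == 2 { print $c; exit }
  ' "$1"
}

# In the real alert script the results file arrives gzipped as $8:
#   RESULTS=$(mktemp); zcat "$8" > "$RESULTS"
# Demo input standing in for the decompressed results:
RESULTS=$(mktemp)
printf 'src,dest\n10.0.0.5,192.168.1.9\n' > "$RESULTS"

SRC=$(first_field "$RESULTS" src)
DEST=$(first_field "$RESULTS" dest)
echo "capturing between $SRC and $DEST"

# Restrict the capture to traffic between the two hosts:
# tcpdump -i ens33 -G 300 -W 3 host "$SRC" and host "$DEST" \
#   -w /mnt/nfs/pcaps/pcap-%Y-%m-%d_%H.%M.%S
```

If you build this as a modern custom alert action instead, the same results file path is delivered in the JSON payload on stdin rather than as $8.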
Hello, I count results by _time in a table panel like this, and it works perfectly. When the result is 0, it is only displayed once a later _time bin has a result. For example, if at 7h the result is 0 but at 8h the result is 1, the results for 7h and 8h are correctly displayed. But as long as the result stays 0, nothing is displayed.

index=toto
| bin span=1h _time
| stats count as Pb by s _time
| search Pb >= 3
| timechart dc(s) as s span=1h
| where _time < now()
| eval time = strftime(_time, "%H:%M")
| stats sum(s) as nbs by time
| rename time as Heure

So I tried this to display results = 0, but it doesn't work. Could you help, please?

| eval nbs =if(isnull(nbs, 0, nbs)
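The attempt above has the parentheses in the wrong place: isnull() takes a single argument. A corrected sketch:

```spl
| eval nbs = if(isnull(nbs), 0, nbs)
```

or, equivalently:

```spl
| fillnull value=0 nbs
```

One caveat: if the 7h row is missing entirely (rather than present with a null nbs), fillnull cannot create it; the zero-filled buckets have to come from the timechart stage before the final stats.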
Hi, what's the latest version of Splunk that's free? Thanks!
Hi, I want to extract the mac_algorithms field with a regex from an nmap scan result. Does anyone have an idea how this works best? I've tried a few things, but not all fields are found in Splunk. Here you can see my example: https://regex101.com/r/eJ16fA/1

Here is my nmap scan example:

kex_algorithms: (8) curve25519-sha256@libssh.org ecdh-sha2-nistp256 ecdh-sha2-nistp384 ecdh-sha2-nistp521 diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 diffie-hellman-group14-sha1 diffie-hellman-group1-sha1
server_host_key_algorithms: (4) ssh-rsa ssh-dss ecdsa-sha2-nistp256 ssh-ed25519
encryption_algorithms: (14) aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 chacha20-poly1305@openssh.com aes128-cbc 3des-cbc blowfish-cbc cast128-cbc aes192-cbc aes256-cbc arcfour rijndael-cbc@lysator.liu.se
mac_algorithms: (19) hmac-md5-etm@openssh.com hmac-sha1-etm@openssh.com umac-64-etm@openssh.com umac-128-etm@openssh.com hmac-sha2-256-etm@openssh.com hmac-sha2-512-etm@openssh.com hmac-ripemd160-etm@openssh.com hmac-sha1-96-etm@openssh.com hmac-md5-96-etm@openssh.com hmac-md5 hmac-sha1 umac-64@openssh.com umac-128@openssh.com hmac-sha2-256 hmac-sha2-512 hmac-ripemd160 hmac-ripemd160@openssh.com hmac-sha1-96 hmac-md5-96
compression_algorithms: (2) none zlib@openssh.com
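A sketch of a rex that captures everything between the mac_algorithms header and the next section header, then splits the capture into a multivalue field (it assumes the raw event keeps the section labels shown above):

```spl
... | rex field=_raw "(?s)mac_algorithms:\s*\(\d+\)\s*(?<mac_algorithms>.+?)\s*compression_algorithms:"
| makemv mac_algorithms
```

The (?s) flag lets `.` cross newlines, which matters if the algorithm names are on separate lines in the raw event; makemv with its default delimiter then splits the capture on whitespace into one value per algorithm.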
Hello, I use two separate, almost identical searches. Now I want to merge them into one search. Here are the two searches:

index=toto sourcetype="cit" type=* earliest=@d+7h latest=@d+19h
| fields citrtt s
| bin span=1h _time
| search citrtt > 150
| stats count as PbPerf by s _time
| search PbPerf >= 2
| timechart dc(s) as s span=1h
| where _time < now()
| eval time = strftime(_time, "%H:%M")
| stats sum(s) as nbs by time
| rename time as Heure

index=toto sourcetype="cit" type=* earliest=@d+7h latest=@d+19h
| fields citnet s
| bin span=1h _time
| search citnet > 80
| stats count as PbPerf by s _time
| search PbPerf >= 2
| timechart dc(s) as s span=1h
| where _time < now()
| eval time = strftime(_time, "%H:%M")
| stats sum(s) as nbs by time
| rename time as Heure

Could you help, please?
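Since the two searches differ only in which field is thresholded, one sketch is to count both conditions in a single pass over the data (this treats a host as problematic if either threshold trips at least twice in the hour; adjust the where clause if you need the two counts kept apart):

```spl
index=toto sourcetype="cit" type=* earliest=@d+7h latest=@d+19h
| fields citrtt citnet s
| bin span=1h _time
| stats count(eval(citrtt > 150)) as PbRtt count(eval(citnet > 80)) as PbNet by s _time
| where PbRtt >= 2 OR PbNet >= 2
| timechart dc(s) as s span=1h
| where _time < now()
| eval Heure = strftime(_time, "%H:%M")
| stats sum(s) as nbs by Heure
```

If instead you want the two original results simply summed, `append` the second search onto the first and finish with `stats sum(nbs) as nbs by Heure`; that is simpler but runs the data twice.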