All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello Splunkers, I need help with Network Security Group flow logs: each of the flow tuples should become a single event, together with the other relevant data for that event.

Sample.log _raw:

{"time":"2021-10-25T16:17:50.8670851Z","systemId":"1c5751f4-8686-4ea5-82ee-173b64d401dd","macAddress":"xxxxxxxxxx","category":"NetworkSecurityGroupFlowEvent","resourceId":"/SUBSCRIPTIONS/A80612A2-33D6-47FF-817A-283E8BC8EDD2/RESOURCEGROUPS/C-SAP-EUS-NONPROD-01-INT-NETWORKING-RG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/DATA-INT-SUBNET-NSG","operationName":"NetworkSecurityGroupFlowEvents","properties":{"Version":2,"flows":[{"rule":"DefaultRule_AllowVnetOutBound","flows":[{"mac":"000D3A57248C","flowTuples":["1635178607,,10.123.2.28,46058,9997,T,O,A,E,1,74,1,60","1635178607,10.115.34.31,10.123.2.18,29128,9997,T,O,A,E,19,7292,16,1227","1635178609,10.115.34.31,10.119.241.5,26540,9997,T,O,A,E,47,54806,64,4395","1635178612,10.115.34.31,13.69.239.72,56024,443,T,O,A,B,,,,","1635178613,10.115.34.31,13.69.239.72,56026,443,T,O,A,B,,,,","1635178614,10.115.34.31,10.192.124.221,56488,80,T,O,A,B,,,,","1635178618,10.115.34.31,13.69.239.72,56024,443,T,O,A,E,8,1158,8,4897"]}]},{"rule":"UserRule_AzAppSubnet_access_toAzDBSubnet_Catch-all","flows":[{"mac":"000D3A57248C","flowTuples":["1635178635,10.115.32.28,10.115.34.31,54322,33015,T,I,A,B,,,,"]}]}]}}

In JSON form the event looks like this (tuples abbreviated):

category: NetworkSecurityGroupFlowEvent
macAddress: xxxxxxxxxx
operationName: NetworkSecurityGroupFlowEvents
properties:
    Version: 2
    flows:
        flows:
            flowTuples:
                1635172376,ip1,ip2,58636,443,T,O,A,E,6,1611,1,66
                1635172377,ip1,ip2,27910,443,T,O,A,B,,,,
                1635172377,ip1,ip2,59136,443,T,O,A,E,0,0,0,0
                1635172378,ip1,ip2,56756,9997,T,O,A,B,,,,
                1635172378,ip1,ip2,58686,9997,T,O,A,B,,,,
                1635172379,ip1,ip2,53684,9997,T,O,A,B,,,,

Desired result:

Event 1:
category: NetworkSecurityGroupFlowEvent
macAddress: xxxxxxxxxx
operationName: NetworkSecurityGroupFlowEvents
properties:
    Version: 2
    flows:
        flows:
            flowTuples:
                1635172376,ip1,ip2,58636,443,T,O,A,E,6,1611,1,66

Event 2: the same fields, but with only the second tuple:
                1635172377,ip1,ip2,27910,443,T,O,A,B,,,,

Thanks
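A hedged sketch of one search-time way to split these tuples (the index and sourcetype names are placeholders, and the tuple field names follow the documented NSG v2 tuple layout; the usual production answer is index-time props/transforms, which this does not cover). Each tuple is pulled out with spath and expanded into its own result row:

```
index=azure_nsg sourcetype="azure:nsg:flow"
| spath path=properties.flows{}.flows{}.flowTuples{} output=tuple
| mvexpand tuple
| rex field=tuple "^(?<start_time>\d+),(?<src_ip>[^,]*),(?<dest_ip>[^,]*),(?<src_port>[^,]*),(?<dest_port>[^,]*),(?<protocol>[^,]*),(?<direction>[^,]*),(?<decision>[^,]*)"
| table _time category macAddress operationName src_ip dest_ip src_port dest_port protocol direction decision tuple
```

Note mvexpand has a memory limit on very large multivalue fields, so for high-volume NSG data the props/transforms route is worth investigating.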
Hello everyone, I have the following inputs.conf file, which is actually working for the first two stanzas, but not for the third one. Could someone please tell me why? I do not get any events from it.

[WinEventLog://Security]
disabled = 0
renderXml = 1
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
sourcetype = XmlWinEventLog
index = ad
whitelist1=4624,4769,4728,4732,4756,4761,4751,4746

# This stanza will send all events for the event_code 21
[WinEventLog://Microsoft-Windows-TerminalServices-LocalSessionManager/Operational]
disabled = 0
renderXml = 1
sourcetype = XmlWinEventLog
index = ad
source="XmlWinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational"
whitelist2=21

# This stanza will send all events for the event_code 1024
[WinEventLog://Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational]
disabled = 0
renderXml = 1
sourcetype = XmlWinEventLog
index = ad
source="XmlWinEventLog:Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational"
whitelist3=1024

Thank you very much for helping me!
I've deployed an architecture with a centralized S3 bucket that forwards AWS logs to an SQS queue. On the Splunk side, I have an Enterprise edition, have already installed the Splunk Add-on for AWS, and set the input as Custom > SQS, configured as follows: account number and access keys, plus an IAM role with assume-role permissions. I still can't get logs into Splunk; any guidance for troubleshooting? Also, is it possible to share a reference example of SQS access policies? Thanks
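On the policy question, here is a minimal sketch of the kind of SQS queue access policy that lets the central S3 bucket deliver event notifications to the queue. All names, ARNs, and account IDs below are placeholders, not values from any real environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ToSendMessages",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111122223333:my-splunk-queue",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::my-central-logs-bucket" }
      }
    }
  ]
}
```

Separately, the account or role the add-on uses typically needs sqs:GetQueueUrl, sqs:GetQueueAttributes, sqs:ReceiveMessage, and sqs:DeleteMessage on the queue, plus s3:GetObject on the bucket, or messages may be visible but never ingested.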
Hello all, I am a newbie to ES and need some help with a basic ES use case. We are ingesting our firewall logs into Splunk. How can I set up a search that checks whether connection attempts (i.e. dest_ip) are going to malicious / C2 IP addresses? Roughly:

index=cisco | eval connection = if(dest_ip is in Threat_intel_List) → generate an alert or show the data in table format

We don't want to rely on manually creating a lookup file and keeping it updated by hand.
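A hedged sketch, assuming a threat feed has already been loaded into a lookup (the lookup and field names here are made up; in ES, enabled threat intelligence downloads feed the Threat Activity correlation searches automatically, which may already cover this without any custom SPL):

```
index=cisco
| lookup threat_intel_by_ip ip AS dest_ip OUTPUT threat_key threat_collection
| where isnotnull(threat_key)
| stats count BY src_ip dest_ip threat_key threat_collection
```

Since ES can subscribe to external threat feeds and refresh them on a schedule, this avoids the manually maintained lookup file.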
I'm trying to use the map command, and it seems to fail when I use some functions within the subsearch (specifically: cidrmatch()). This search returns a correctly populated table of all the fields except for the "matches" field, which is just empty:

index=my_index earliest=-5m
| table _time src_ip
| map search=" | search index=my_other_index earliest=-6h
    | rename id as id2
    | dedup id2
    | eval searchip=$src_ip$
    | eval matches=if(cidrmatch(cidr_block, searchip), "true", "false")
    | table id2 searchip matches cidr_block"

Note: my goal is to join two searches, but not on a common field; instead I want to cidr-match IPs from one search against the CIDR blocks in the other. I don't want to use lookup tables, as I want both sides to stay dynamic.
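One hedged thing worth checking: the inner double quotes around "true" and "false" terminate the map search string early, so everything after them is lost. A sketch with those quotes escaped, and the $src_ip$ token quoted as well since token substitution is literal text (field names unchanged from the question):

```
index=my_index earliest=-5m
| table _time src_ip
| map maxsearches=100 search="search index=my_other_index earliest=-6h
    | rename id AS id2 | dedup id2
    | eval searchip=\"$src_ip$\"
    | eval matches=if(cidrmatch(cidr_block, searchip), \"true\", \"false\")
    | table id2 searchip matches cidr_block"
```

map also defaults to a small maxsearches limit, so raising it explicitly avoids silently dropping input rows.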
Hello, I am new to Splunk and I am looking for a way to write a rule to detect SMB traffic. Thanks
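For context, SMB runs over TCP 445 (and legacy NetBIOS session service on 139), so a first hedged sketch against network or firewall data might look like this. The index, sourcetype, and field names are placeholders and depend entirely on what is being ingested:

```
index=network sourcetype=firewall (dest_port=445 OR dest_port=139)
| stats count BY src_ip dest_ip dest_port
```

From there it can be narrowed, e.g. to SMB from unexpected source subnets, and saved as an alert.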
I was wondering... can I get (probably from _internal) which reports and dashboards were executed by users? I suppose saved searches spawned by the scheduler can be found either in configuration (by checking their schedules) or by tracking scheduler logs. But ad-hoc ones? The use case: users created many different dashboards and reports, and we want to clean up the ones no longer used. But first, of course, we need to find them.
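A hedged starting point: search activity is recorded in the _audit index, and dashboard page views in the internal web access logs. Both sketches assume default Splunk logging, and the web access sourcetype name varies by version (splunk_web_access on older releases, splunkd_ui_access on newer ones):

```
index=_audit action=search info=granted
| stats count latest(_time) AS last_run BY user savedsearch_name

index=_internal sourcetype=splunk_web_access uri_path=*/app/*
| rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?]+)"
| stats count latest(_time) AS last_viewed BY user app dashboard
```

Comparing the resulting "last_run" / "last_viewed" lists against the full inventory from the REST endpoints should surface the never-used objects.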
I am completely new to Splunk and have to deploy it in our environment. Can I get some guidance on best practices for deployment? I have 3 physical CentOS boxes. What would you set each one up with?

Splunk1 - configured RAID 10 - 5 TB SSD
Splunk2 - 500 GB SSD
Splunk3 - 500 GB SSD

Any advice is appreciated, thanks!
The query below produces the expected result only some of the time; it does not work for similar data on other, seemingly random, days.

Query:

index=my_summary source=app_response_status report=app_response_status ApiName=metadata
| timechart span=1d sum("200"), sum("404")

Working data:

10/24/2021 00:00:00 +0000, search_name=app_response_status, search_now=1635123600.000, info_min_time=1635033600.000, info_max_time=1635120000.000, info_search_time=1635124485.280, 200=7552, 404=7582, ApiName=metadata, info_sid=scheduler__gmrm_VkEtdm1lLXJ0bXMtc2g__RMD50cd89fe00e4c64f8_at_1635123600_39072, RowTotals=15134, info_max_time="1635120000.000", info_min_time="1635033600.000", info_search_time="1635124485.280", report=app_response_status

Not working data:

09/03/2021 00:00:00 +0000, search_name=app_response_status, search_now=1630717200.000, info_min_time=1630627200.000, info_max_time=1630713600.000, info_search_time=1630717575.202, 200=9483, 404=5287, ApiName=metadata, info_sid=scheduler__gmrm_VkEtdm1lLXJ0bXMtc2g__RMD50cd89fe00e4c64f8_at_1630717200_72746, RowTotals=14770, info_max_time="1630713600.000", info_min_time="1630627200.000", info_search_time="1630717575.202", report=app_response_status

I am not able to figure out the problem; both rows look the same to me, but one does not work. Please help.
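One hedged thing to rule out: purely numeric field names are easy to misread in SPL, since a quoted "200" inside a stats function can be taken as a literal rather than a field name, and a day whose summary rows lack one of the fields also produces gaps. A sketch that renames the fields first and backfills missing values:

```
index=my_summary source=app_response_status report=app_response_status ApiName=metadata
| rename "200" AS status_200, "404" AS status_404
| fillnull value=0 status_200 status_404
| timechart span=1d sum(status_200) AS total_200, sum(status_404) AS total_404
```

If the renamed version behaves consistently across both dates, the numeric field names were the culprit.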
Hi all, I have IP flow information being ingested into Splunk, consisting of source_ip, source_port, destination_ip, destination_port. Occasionally, due to environmental factors, we get a duplicate log of the flow in the reverse direction. E.g.:

source_ip    source_port    destination_ip    destination_port
1.1.1.1      42000          2.2.2.2           80       <- keep this
2.2.2.2      80             1.1.1.1           42000    <- I would like to discard this
1.1.1.5      42300          2.2.2.2           80
3.3.3.3      134            5.5.5.5           80

My goal is to identify and ultimately filter out the duplicated entries. What I am having trouble with is coming up with a query that flags events which have a duplicate entry in the reverse direction; I can then filter out the flagged duplicates where, say, source_port < destination_port. I am trying to avoid computationally heavy commands such as nested searches, as the data set is quite large. I would greatly appreciate some ideas or assistance on how this can be tackled.
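One hedged, subsearch-free approach: build a direction-independent key from the two endpoints, prefer the row whose source port is higher (usually the client side), and dedup on the key. The index name is a placeholder:

```
index=flows
| eval end_a=source_ip.":".source_port, end_b=destination_ip.":".destination_port
| eval flow_key=if(end_a < end_b, end_a."|".end_b, end_b."|".end_a)
| eval keep_pref=if(tonumber(source_port) > tonumber(destination_port), 0, 1)
| sort 0 flow_key keep_pref
| dedup flow_key
```

Two caveats: dedup keeps only the first event per key, so the sort is only needed if arrival order does not already favour the row to keep, and if the same five-tuple legitimately recurs over time, binning _time into flow_key limits the collapse to near-simultaneous mirror pairs.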
Hi Champions, in the dataset below I want to create a conditional Splunk query. For example: first check whether the rsyslog service was stopped; if it was, find who stopped it and on which server, then display the results in a table. Can you please help?

Oct 25 16:30:06 keybox sudosh: KHYJS6PxEI64zG Henry: service rsyslog start
Oct 25 16:30:02 keybox sudosh: KHYJS6PxEI64zG Joseph: #011service rsyslog stop
Oct 25 15:15:30 keybox sudosh: ssNjFZca22OvaB Henry: service rsyslog stop
Oct 25 15:08:26 keybox sudosh: ssNjFZla22OvaB Henry: #011service rsyslog start
Oct 25 15:07:46 keybox sudosh: ssNjFZla22OvaB Joseph: service rsyslog status
Oct 25 15:06:21 keybox sudosh: ssNjF0la22OvaB Asher: service rsyslog statutss
Oct 25 14:49:57 eqc-03-tpp sudosh: gkrMz1dLey0CS1 John: cat /etc/red#011#177#177#177#177#177#177#177#177#177#177#177#177#177#177#177r#177#177#177#177#177#177#177#177#177#177#177#177#177sys#177#177ervice rsyslog status
Oct 25 14:48:26 keybox sudosh: VSjTDhPH3iM5MY Ahser: service rsyslog status

Fields are:
Date and Time = Oct 25 16:30:06
host = keybox
index = sudosh_app
sourcetype = sudosh

I tried the query below, but I am unable to make it conditional:

index = sudosh_app_protected host = *
| eval "Critical Logging Events:" = "Rsyslog was Stopped on " + host, "Date and Time" = MonthDateTime, "User" = UserName, "Source" = sourcetype
| table "Date and Time", "Critical Logging Events:", "User", "Source"

Please help. Thank you in advance.
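A hedged sketch against the sample events above: rather than an if() condition, filter to stop commands up front and extract the user with rex (the regex assumes the "sudosh: <session-id> <user>:" layout shown, and the `.*` absorbs the #011 tab-escape prefix):

```
index=sudosh_app sourcetype=sudosh rsyslog stop
| rex "sudosh: \S+ (?<UserName>\S+): .*service rsyslog stop$"
| where isnotnull(UserName)
| eval "Critical Logging Events:"="Rsyslog was stopped on ".host
| table _time, host, UserName, "Critical Logging Events:"
```

The where clause discards events that merely mention the words without being an actual stop command.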
Hi Experts,

|search filed1=Enabled OR "Enabled" OR "Disabled" OR Disabled

The above search returns four rows. If I try to sum them by status, it still returns four rows:

|stats sum(dc(servers)) by Status

Status ---> dc(servers)
Enabled --> 10
Enabled --> 3
Disabled -> 23
Disabled -> 6

Thank you.
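Two hedged observations: sum(dc(servers)) is not valid nesting inside a single stats call, and group-by rows that look identical usually mean the values differ invisibly (trailing whitespace or case). A sketch that normalizes the field before grouping:

```
| eval Status=trim(Status)
| stats dc(servers) AS server_count BY Status
```

If the duplicates persist after trim(), comparing len(Status) across the rows will show where the hidden characters are.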
Hello Splunkers, we are seeing this error while integrating a SQL database using the DB Connect add-on. Kindly let me know what this error means.

Version: SQL Server 2014 EE

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:xxxxxxxxxxxxxxx
I have a rather complicated query that goes like this:

index=* source=* earliest=-4mon@mon latest=@mon RESPONSE_CODE="0"
| bin _time span=1mon
| stats count AS MonthTotal1 SUM(AMOUNT) AS MonthTotal BY MERCHANT_CODE, SUBMERCHANT_CODE, _time
| eval lastMonthStart = relative_time(now(),"-mon@mon")
| stats sum(eval(if(_time>=lastMonthStart,MonthTotal,0))) AS 1M_Total sum(eval(if(_time>=lastMonthStart,0, MonthTotal))) AS 3M_Total values(eval(if(_time>=lastMonthStart,MonthTotal1,null()))) AS Transaction sum(eval(if(_time<lastMonthStart,MonthTotal1,null()))) AS THREE_MONTHS BY SUBMERCHANT_CODE, MERCHANT_CODE
| eval 3M_Total_avg = round(3M_Total/3,2)
| eval RATE_Total = round((1M_Total/3M_Total_avg)*100,2)
| search RATE_Total>=200 OR RATE_Total=0
| join MERCHANT_CODE [search index = * | dedup MERCHANT_CODE | table MERCHANT_CODE, BANK]
| table MERCHANT_CODE SUBMERCHANT_CODE, BANK, 1M_Total, RATE_Total

It seems complicated, but the gist is: for each sub-merchant, compare the latest month's total transaction value to the average of the 3 months before it, and if the rate is >=200%, show it in a table. A typical event goes like this (I'll omit some unnecessary parts):

2021-10-25 13:52:33 TRANSACTION_ID="144479283"AMOUNT="10000", MERCHANT_TRANSACTION_CODE="17797161285", RESPONSE_CODE="0",MERCHANT_CODE="MOMOCE", SUBMERCHANT_CODE="22312"

Some things to note:
- Each MERCHANT can have several SUBMERCHANTs, or none at all, so the SUBMERCHANT field does not always exist in events.
- Each MERCHANT has a BANK associated with it, but in another table.

I have a query just for SUBMERCHANT as a baseline to compare results against, but the query above, even when I use eventstats instead of stats, shows different results than the baseline. Does anyone have any ideas to untangle this mess? I'd really appreciate it!
Dear Splunk community, I am looking for logs that say "started with profile: [profile name]" and retrieving the profile name from the found events. Then I want to use the profile name to look for other events (from a different source), and if one error or more are found, I would like to count that as one found error, per platform. To make things more clear, I have the following search query (query one):

index="myIndex" "started with profile" BD_L*
| table _raw, platform, RUNID
| eval Platform=case(searchmatch("LINUX"),"LINUX",searchmatch("AIX"),"AIX",searchmatch("DB2"),"DB2", searchmatch("SQL"),"SQL", searchmatch("WEBSPHERE"),"WEBSPHERE", searchmatch("SYBASE"),"SYBASE", searchmatch("WINDOWS"),"WINDOWS", true(),"ZLINUX")
| stats count by Platform
| rename count AS "Amount"

The events found by the above query contain the following (raw):

Discovery run, 2021101306351355 started with profile BD_L2_Windows

The above query returns a table with the amount of Discovery runs per platform. Using the following piece of code I can extract RUNID from the events; RUNID is what I need to use in a second search when looking for errors:

| rex "Discovery run, (?<RUNID>.+) started with profile"

Using RUNID I can look for errors (query two):

index="myIndex" source="/*/RUNID/*" CASE("ERROR") CTJT*
| dedup _raw
| stats count
| rename count AS "Amount"

Now, I am looking for a way to combine the above two queries into one and count the amount of platforms that have at least one error. So let's say we have the following simulation:
- Two runs (one Windows and one Linux)
- Windows run has 0 errors (none found in query two)
- Linux has 6 errors (found in query two)

This should result in the following:

Platform | Amount
Linux | 1

I need to find some way to return true (or one) from query two and use that in query one to group the results, but I am unable to due to lack of experience. I have not yet found anything similar to my question and hope anyone here can help me out. Thanks in advance.
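One hedged way to glue two searches like these together is map, which runs the error search once per RUNID found by the first search and carries Platform through as a token (the case() arms are abbreviated here, and map gets slow when there are many runs):

```
index="myIndex" "started with profile" BD_L*
| rex "Discovery run, (?<RUNID>.+) started with profile"
| eval Platform=case(searchmatch("LINUX"),"LINUX", searchmatch("AIX"),"AIX", true(),"ZLINUX")
| fields RUNID Platform
| map maxsearches=100 search="search index=myIndex source=\"/*/$RUNID$/*\" ERROR CTJT*
    | stats count AS errors
    | eval Platform=\"$Platform$\", RUNID=\"$RUNID$\""
| where errors > 0
| stats dc(RUNID) AS Amount BY Platform
```

Since stats count always emits a row, runs with zero errors survive into the where clause and are dropped there, which yields the one-row-per-platform result described in the simulation.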
index=pan* dvc_name="*" sourcetype="pan:traffic" OR sourcetype="pan:system"

How can I trigger an email alert if, for example, one or more devices stop sending traffic logs for 24 hours? I tried an alert with the "number of results" trigger condition, but it doesn't do what I need, because Splunk counts the total results across all logs rather than per device.
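A hedged sketch that evaluates silence per device rather than by total result count. tstats requires dvc_name to be an indexed field; if it is only extracted at search time, swap the tstats line for a plain stats latest(_time) search over the same window:

```
| tstats latest(_time) AS last_seen WHERE index=pan* (sourcetype="pan:traffic" OR sourcetype="pan:system") BY dvc_name
| eval hours_silent=round((now() - last_seen) / 3600, 1)
| where hours_silent >= 24
```

Run it over, say, the last 7 days and alert when results > 0. One caveat: a device absent from the entire window produces no row at all, so comparing against a lookup of expected devices is needed to catch those.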
Hello, I have a dashboard with 2 panels; in the second one I have a drilldown with a link to a search. I'm trying to configure a token, but it is not working. This is what I tried to do:

<init>
  <set token="TransactionId">$TransactionId$</set>
</init>

and this is what I'm getting once I click on the link: TransactionId=$TransactionId$

What am I missing? Thanks
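For what it's worth, an <init> block runs once when the dashboard loads, before any row is clicked, so $TransactionId$ has no value at that point. In a table drilldown, the clicked row's fields are available as $row.<fieldname>$ tokens. A hedged sketch (the target search and index name are placeholders):

```xml
<drilldown>
  <set token="TransactionId">$row.TransactionId$</set>
  <link target="_blank">search?q=index%3Dmyindex%20TransactionId%3D$row.TransactionId|u$</link>
</drilldown>
```

The |u filter URL-encodes the token value so it survives inside the link.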
Hi all, we've configured a Forcepoint Next Generation Firewall (NGFW) to send data through its Security Management Center (SMC) after following this article: https://forcepoint.github.io/docs/ngfw_and_splunk/. However, no data is displayed in the Splunk Enterprise (standalone) Web UI > Apps > Forcepoint. From a tcpdump on the Splunk Enterprise host (Linux CentOS 7), we can see incoming traffic on the configured TCP 19997 input port. Could anyone advise, please? Kind regards, Lubo
Dear community, I have been trying to integrate Splunk into my scripting workflow for some time now, and it's time to reach out for some help. The design is based on <form>, and I have tried to implement it in two ways: running the search with a custom Python command from the dashboard, taking my three input fields as arguments, and I am not sure how NOT to run it automatically.

Here is one version of my XML (in the search, "| pullssp" is my Python script that requires the above inputs):

<form script="button.js">
  <!--
  <init>
    <set token="hostname"></set>
    <set token="username"></set>
    <set token="password"></set>
  </init>
  -->
  <label>submit button</label>
  <fieldset submitButton="false"></fieldset>
  <row depends="$hide$">
    <panel>
      <html>
        <style>
          .btn-search{ color: #fff; padding: 6px 15px; font-weight: 500; background-color: #5cc05c; border: transparent; display: inline-block; height: auto; line-height: 20px; font-size: 14px; box-sizing: border-box; margin-bottom: 0; text-align: center; vertical-align: middle; cursor: pointer; border-radius: 3px; white-space: nowrap; }
          .btn-search:hover{ background-color: #40a540; border-color: transparent; color: #fff; box-shadow: inset 0 -2px 0 rgba(0,0,0,.1); text-decoration: none; text-shadow: none; filter: none; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <input type="text" searchWhenChanged="false" id="host" token="hostname">
        <label>Server</label>
        <default>https://192.168.1.10</default>
      </input>
      <input type="text" searchWhenChanged="false" id="user" token="username">
        <label>Username</label>
        <default>admin@admin.com</default>
      </input>
      <input type="text" searchWhenChanged="false" id="pass" token="password">
        <label>Password</label>
        <default>Admin123</default>
      </input>
      <html>
        <input type="button" value="Search" id="submit_host" class="btn-search"/>
      </html>
      <table>
        <search>
          <query>| pullssp $hostname$ $username$ $password$</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Ideally I would like this not to run automatically, only when I submit my inputs with the search button. The JS for this version (constantly adjusting, as I do not know JS):

require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function($, mvc) {
    var defaultTokenModel = mvc.Components.get("submitted");
    $("#submit_host").click(function() {
        // read the three text inputs by their container ids
        var hostname = $('#host input[type="text"]').val();
        var username = $('#user input[type="text"]').val();
        var password = $('#pass input[type="text"]').val();
        defaultTokenModel.set("hostname", hostname);
        defaultTokenModel.set("username", username);
        defaultTokenModel.set("password", password);
    });
});

Here is another way I am thinking of trying: pass the values to a JS script and run the command from there:

<form script="get.js" hideSplunkBar="1" hideFooter="1" hideEdit="0" isDashboard="0">
  <label>Update</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="text" token="field1">
      <label>Server</label>
    </input>
    <input type="text" token="field2">
      <label>Username</label>
    </input>
    <input type="text" token="field3">
      <label>Password</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        <fieldset submitButton="true">
          <button class="btn btn-primary button1">
            <span>Update STUFF</span>
          </button>
        </fieldset>
      </html>
    </panel>
  </row>
</form>

General idea of how the JS script should react to a click on button1:

require([
    "jquery",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function($, SearchManager) {
    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: "| pullssp $field1$ $field2$ $field3$"
    });
    $(".button1").on("click", function() {
        var ok = confirm("Are you sure?");
        if (ok) {
            mysearch.startSearch();
        }
    });
});

How can I use the default token model to grab those tokens and pass them on to my search to use with the Python script command, please? I could not find any examples of this. @vnravikumar I have seen a couple of your posts and I think you might be able to help?

Many thanks in advance, all.
I have configured an automatic lookup; however, when I try to do a search, it gives this message:

Could not load lookup=LOOKUP-auto_prices
[subsearch]: Could not load lookup=LOOKUP-auto_prices

Can someone help me, please?