All Topics



I'm looking for a query to see my Splunk users that haven't logged into Splunk in x days. Currently I am looking at this query:

| rest /services/authentication/users splunk_server=local | eval c_time=strftime(last_successful_login,"%m/%d/%y %H:%M:%S") | table title roles last_successful_login c_time

However, this shows me all users, where I only want to see those that haven't logged in in x days. Any assistance is appreciated.
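One possible refinement (untested sketch; 30 stands in for x, and it assumes users who have never logged in simply lack a last_successful_login value in the REST output):

```
| rest /services/authentication/users splunk_server=local
| eval days_since=(now()-last_successful_login)/86400
| where isnull(last_successful_login) OR days_since > 30
| eval c_time=strftime(last_successful_login,"%m/%d/%y %H:%M:%S")
| table title roles last_successful_login c_time days_since
```

The `where` clause keeps only users whose last successful login is older than the threshold, plus users with no recorded login at all.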
Hello everyone,

I've got a local universal forwarder on an internal network (all in a Linux env). My intermediate forwarder and my deployment server are in another network zone, behind a DMZ. I've got no Splunk component in the DMZ (my IFW & DS are on the other "side").

I've got only one gateway from my local forwarder to the DMZ, on port 80. The flow can then be rerouted via proxy or reverse proxy, depending on the need, to the target. At least, that's the theory.

I've put in my local forwarder:
- an alias my-ds.com:80 in deploymentClient.conf (to be routed "after" the DMZ to my-DS.com:8089)
- an alias my-uf.com:80 in outputs.conf (to be routed "after" the DMZ to my-uf.com:9997)
- [proxyConfig] with my-gateway:80 in server.conf
I route the 2 aliases with a modification of /etc/hosts.

Of course, it does not work, and I don't really understand how it is supposed to work. The handshake always fails for the DS, and TCPOutputProc is paused because of a timeout on the target (so it is not reachable either).

Here's my question: am I doing this correctly, or am I totally off track? I've read the doc: https://docs.splunk.com/Documentation/Splunk/8.1.1/Admin/ConfigureSplunkforproxy but I'm still stuck without at least being sure I'm trying this the right way.

Thank you for your suggestions,
Ema
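A hedged sketch of the relevant stanzas (hostnames are placeholders from the question). One caveat worth verifying: as far as I know, [proxyConfig] only applies to splunkd's HTTP/HTTPS traffic, such as deployment-client polling of the DS, while the data output to 9997 is raw S2S over TCP and will not traverse an HTTP proxy unless the proxy supports CONNECT-style tunneling, which could explain the TCPOutputProc timeout:

```
# server.conf on the universal forwarder (sketch)
[proxyConfig]
http_proxy = http://my-gateway:80
https_proxy = http://my-gateway:80

# deploymentclient.conf (sketch)
[target-broker:deploymentServer]
targetUri = my-ds.com:80
```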
Hi Everyone, I have one requirement. I have one multiselect like below:

<fieldset submitButton="false" autoRun="true">
  <input type="multiselect" token="OrgName" searchWhenChanged="true">
    <label>Salesforce Org Name</label>
    <choice value="*">All Salesforce Org</choice>
    <search>
      <query>index="abc" sourcetype="xyz" | lookup Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | stats count by OrgName</query>
      <earliest>-30d@d</earliest>
      <latest>now</latest>
    </search>
    <fieldForLabel>OrgName</fieldForLabel>
    <fieldForValue>OrgName</fieldForValue>
    <prefix>(</prefix>
    <valuePrefix>OrgName ="</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter> OR </delimiter>
    <suffix>)</suffix>
    <initialValue>*</initialValue>
    <default>*</default>
  </input>

My requirement is to convert it into a dropdown, so I converted it with the code below:

<input type="dropdown" token="OrgName" searchWhenChanged="true">
  <label>Salesforce Org Name</label>
  <choice value="*">All Salesforce Org</choice>
  <search>
    <query>index="abc" sourcetype="xyz" | lookup Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | stats count by OrgName</query>
  </search>
  <fieldForLabel>OrgName</fieldForLabel>
  <fieldForValue>OrgName</fieldForValue>
  <prefix>OrgName="</prefix>
  <suffix>"</suffix>
  <initialValue>OneForce</initialValue>
  <default>OneForce</default>
</input>

This is working fine, but what I want is for the "All Salesforce Org" value to come at the end. Can someone guide me?
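One possible approach (untested sketch): static <choice> entries render before search-driven rows, so dropping the static choice and generating the "All" row from the search itself, appended last, should keep it at the end:

```
<input type="dropdown" token="OrgName" searchWhenChanged="true">
  <label>Salesforce Org Name</label>
  <search>
    <query>index="abc" sourcetype="xyz"
| lookup Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| stats count by OrgName
| eval label=OrgName, value=OrgName
| fields label value
| append [| makeresults | eval label="All Salesforce Org", value="*" | fields label value]</query>
  </search>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>value</fieldForValue>
  <prefix>OrgName="</prefix>
  <suffix>"</suffix>
  <default>OneForce</default>
</input>
```

Dynamic choices follow the order of the search results, so the appended row lands at the bottom of the list.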
Hello! I have made a Splunk dashboard which is really useful. I would like to embed this into my React website, and I tried following the steps here: https://securitysynapse.blogspot.com/2019/11/sharing-splunk-dashboards-outside-of-splunk.html I do not have the option "Embed" when clicking edit on a report; I only have the options here: Perhaps this is a permission issue? Do you have any suggestions on how I can include a chart or the whole dashboard in a React application? Thanks, BR Jonathan
Greetings, I've got 2 lookup (CSV) files, one generated from index _internal (approx 15k events) and another generated from LDAP (approx 20k events), and both files have only the host name field.

When running the following SPL, the "search NOT" is returning results that should not be included.

At first we thought it was a problem with case sensitivity, but we ruled this out by changing all the host names to upper case, and still the "search NOT" was returning wrong results.

"| inputlookup LDAP_source.csv | search NOT [ | inputlookup INTERNAL_source.csv ]"

We've tested using 1k lookup files without any problems. Is there perhaps a limit being reached, or a known issue with comparing 2 lookup files using "search NOT"? Thanks in advance for any assistance or insight.
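Worth checking: subsearches are capped by default (around 10,000 results, governed by limits.conf), so a 15k-row inputlookup inside [ ... ] would be silently truncated, which would explain why the NOT misses some hosts; the 1k test files staying under the cap fits that theory. A lookup-based rewrite avoids the subsearch entirely (sketch; assumes the field in both files is named host):

```
| inputlookup LDAP_source.csv
| lookup INTERNAL_source.csv host OUTPUT host AS in_internal
| where isnull(in_internal)
```

Rows that fail the lookup have no in_internal value, so `where isnull(...)` keeps exactly the LDAP hosts absent from the internal file.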
Hi, I created my custom input (mytest.conf.tmpl) by copying /opt/sc4s/local/config/log_paths/lp-example.conf.tmpl. When I send the following event to SC4S on port 5144, the timestamp is extracted as "1/28/21 4:31:30.000 PM" (see attachment); I see that the timestamp is extracted by adding three hours (Jan 28 13:21:30). However, when I read from the file mytest123.log, as you can see, the timestamp is extracted correctly as 1:21:27 PM.

props.conf for mytest123.log:
[sc4s:forcepoint]
TIME_PREFIX= \srt=
MAX_TIMESTAMP_LOOKAHEAD=15

How can I extract the timestamp correctly? Thanks.

Converted 13-digit epoch time = Thursday, January 28, 2021 1:21:27 PM GMT+03:00
"<13> Jan 28 13:35:04 myhost vendor=myvendor product="My xx Security" version=9.9.9 event=Message dvc=111.111.111.111 dvchost=myhost rt=1611829287000 externalId=999999900000000 messageId=mmmmm suser="abcd@xxx.com" duser="aa.bb@xxxx.com " msg="MY Event""
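Since rt= is a 13-digit epoch-milliseconds value (timezone-independent), a hedged props.conf sketch is to parse it explicitly. One caveat: SC4S typically assigns the event time itself before delivering over HEC, so indexer-side timestamp extraction may not apply at all, and a three-hour shift more likely comes from the container's timezone (the SC4S docs describe an env_file variable, SC4S_DEFAULT_TIMEZONE, worth verifying for your version):

```
# props.conf sketch (sourcetype and prefix taken from the question)
[sc4s:forcepoint]
TIME_PREFIX = \srt=
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
```

Here %s%3N parses epoch seconds plus three subsecond digits, matching a 13-digit millisecond value like 1611829287000.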
Hi, Any upcoming update on this TA? Cheers //T
Hello Splunkers, Please help me to resolve this issue. I have 39 CSV files ingested into Splunk in one go, and I am expecting 27 alert email notifications. Every time, I receive 17 or 20 emails out of the 27 triggered alerts, and sometimes I receive all 27 emails as expected. Could you please help me work out how to resolve this issue? Is this issue related to Splunk or to the email server?

Splunk Enterprise version 6.6.3
Hi! I have a Splunk setup with various indexes for different things. I'm trying to create a single search which will identify any of my indexes that haven't received any new data in, say, the past hour or something like that. How could this be done? Thank you!
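A common pattern for this (sketch; the 60-minute threshold is an example) is to compare each index's newest event time with the current time via tstats:

```
| tstats latest(_time) as latest where index=* by index
| eval minutes_since=round((now()-latest)/60)
| where minutes_since > 60
| sort - minutes_since
```

One caveat: this only lists indexes that have ever contained data within the search window; an index with zero events won't appear in tstats output at all.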
Trying out SC4S - not seeing my syslog come through to Splunk. Installed, all running on docker - no firewalls or SELinux. Syslog is hitting the server running SC4S:

tcpdump -i eth0 dst port 514
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:24:24.962899 IP x.x.x.x.bob.com.38897 > 197-202-166-108-dedicated.multacom.com.syslog: SYSLOG local0.warning, length: 273

Docker seems to be running fine - I receive HEC TEST EVENTs and startup events in Splunk: sc4s version=v1.47.3. SC4S logs:

[root@bob system]# docker logs SC4S
'/etc/syslog-ng/local_config/destinations/README.md' -> '/etc/syslog-ng/conf.d/local/config/destinations/README.md'
'/etc/syslog-ng/local_config/filters/README.md' -> '/etc/syslog-ng/conf.d/local/config/filters/README.md'
'/etc/syslog-ng/local_config/filters/example.conf' -> '/etc/syslog-ng/conf.d/local/config/filters/example.conf'
'/etc/syslog-ng/local_config/log_paths/README.md' -> '/etc/syslog-ng/conf.d/local/config/log_paths/README.md'
'/etc/syslog-ng/local_config/log_paths/lp-example.conf.tmpl' -> '/etc/syslog-ng/conf.d/local/config/log_paths/lp-example.conf.tmpl'
'/etc/syslog-ng/local_config/sources/README.md' -> '/etc/syslog-ng/conf.d/local/config/sources/README.md'
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful; checking indexes...
SC4S_ENV_CHECK_INDEX: Checking email {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking epav {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking epintel {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking epintelexit {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking fireeye {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking infraops {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking main {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netauth {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netdlp {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netdns {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netfw {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netids {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netipam {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netops {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netproxy {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking netwaf {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking osnix {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking oswin {"text":"Success","code":0}
SC4S_ENV_CHECK_INDEX: Checking oswinsec {"text":"Success","code":0}
syslog-ng checking config
sc4s version=v1.47.3
starting goss
starting syslog-ng
Hi All, Please help me with a Splunk query to find removed (off-boarded) hosts and indexes in Splunk.
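Splunk doesn't record "off-boarded" hosts as such, but one hedged sketch is to list hosts whose newest event is older than some cutoff (7 days here, as an example) using the metadata command:

```
| metadata type=hosts index=*
| eval days_since=round((now()-recentTime)/86400)
| where days_since > 7
| convert ctime(recentTime) AS last_seen
| table host last_seen days_since
```

Swapping `type=hosts` for other metadata types (e.g. sourcetypes) gives the same staleness view per sourcetype; stale indexes can be found similarly with a tstats latest(_time) by index.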
Hi, I have the below query, which searches two different sources in the same index, joins the results on the app correlation id, and then performs a stats operation. However, the source files are huge, and hence the join is taking too long to get me the results.

index=server sourcetype=perfromance source="*performance.log"  component_role=consumer  | join  app_id [ search index=server sourcetype=component source="*component.log" | rename appCorId as app_id ] | stats count(eval=(process_result="COMPLETED")) as Completed count(eval=(process_result="FAILED")) as Failed

This is a simple join, but it takes a huge amount of time when I search over 24 hours. Please help optimize this query. Thanks, Sandeep
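A join-free sketch that usually scales much better: search both sources in a single pass, align the two id fields with coalesce, and roll up per id before counting (field and source names are taken from the question; note also that the stats eval syntax is count(eval(...)), not count(eval=(...))):

```
index=server ((sourcetype=perfromance source="*performance.log" component_role=consumer) OR (sourcetype=component source="*component.log"))
| eval app_id=coalesce(app_id, appCorId)
| stats values(process_result) as process_result by app_id
| stats count(eval(process_result="COMPLETED")) as Completed, count(eval(process_result="FAILED")) as Failed
```

This avoids join's subsearch limits and double pass over the data; the first stats merges the two sources' events per app_id, the second does the counting.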
Hi, I am getting the below table using the query "index=main host="abcde" | rex field=_raw "(?ms)Node\s+Name\s:\s(?<Node_Name>\w+\S+)" | rex field=_raw "(?ms)Node\sState\s:\s(?<Node_State>[\w\s]+\w)\s+Number" | eval Result=if(Node_State=="Running",  "Ok",  "NotOk") | table Node_Name,Node_State,Result"

Node_Name    Node_State           Result
abc          Stopped              NotOk
cde          Running              Ok
abc          Running              Ok
xyz          Stopped              NotOk
the          Running              Ok
abc          Partially running    NotOk
abc          Stopped              NotOk
xyz          Running              Ok
the          Running              Ok
abc          Running              Ok

Here I want to get the count of "Result=Ok" and the count of "Node_State" to calculate the percentage of "Result=Ok". I tried these queries:

"..... | search Result=ok | stats count(Result) as Total, count(Node_State) as Total1 | eval Enterprise1=round(Total/Total1*100) | fields - Total,Total1"

and

"..... | stats count(Result) as Total, count(Node_State) as Total1 | eval Enterprise1=round(Total/Total1*100) | fields - Total,Total1"

But I am getting 100%, which can't be true, as the count of "Result=Ok" is less than the count of Node_State. Please help me modify the query in the right way to get the desired result.

Thank You.
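The `| search Result=ok` step filters the result set down to only the Ok rows before stats runs, so both counts end up equal; and in the second attempt, count(Result) and count(Node_State) both count every row where the respective field exists, which is all rows in both cases. A sketch that computes both counts in one pass over all rows:

```
... | stats count(eval(Result="Ok")) as Ok_count, count as Total
| eval Enterprise1=round(Ok_count/Total*100)
| fields - Ok_count, Total
```

count(eval(...)) counts only rows where the condition holds, while plain count counts every row, which gives the intended numerator and denominator.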
Hi, I'm searching through the Registry data model and I noticed that in the "user" field I've got process names. How do I fix it? As far as I know, after fixing it - so that the "user" field actually contains a user name - I will need to rebuild the whole data model, right? Will I need to take some extra steps if this data model is accelerated?
The documentation for the Application Protocols list in ES states "The Application Protocols list is a list of port and protocol combinations and their approval status in your organization" and shows the fields available in the file:

dest_port: The destination port number. Must be a number from 0 to 65535.
transport: The protocol of the network traffic. For example, icmp, tcp, or udp.
app: The name of the application using the port.

But where is the field for approval status? Or am I interpreting it in the wrong way?
The Splunk doc says the Expected Views list specifies Splunk Enterprise Security views that are monitored on a regular basis. But what are these views monitored for? What do I need to actually use this for? What's the use case behind it?
Hi, I've got an issue with one of my Data Sources where TrackMe falsely detected a Data Sampling anomaly and, consequently, set the state to Red and the Status Message to:

Alert: data source status is red, monitoring conditions are not met due to anomalies detected in the data sampling and format recognition, review the data sampling window to investigate. This alert means that trackMe detected an issue in the format of the events compared to the format that was previously identified for this source.

I used 'Clear state and run sampling' from the Data Sampling tab, and that is now Green. The Status Flipping tab also shows the object_state as Green, but the 'state' (in the summary at the top) is still Red and the Status Message is still the same (as above). However, I've just noticed that the Timeline in the Status Message tab also shows as 'Green'? How can I drill down into why TrackMe is showing both a Red and a Green status for the same Data Source? Many thanks, Mark.
Hi, I am getting the below table using the query "index=main host="abcde" | rex field=_raw "(?ms)Node\s+Name\s:\s(?<Node_Name>\w+\S+)" | rex field=_raw "(?ms)Node\sState\s:\s(?<Node_State>[\w\s]+\w)\s+Number" | eval Result=if('Node_State'=='Running', "Ok", "NotOk") | table Node_Name,Node_State,Result"

Node_Name    Node_State           Result
abc          Stopped              NotOk
cde          Running              NotOk
abc          Running              NotOk
xyz          Stopped              NotOk
the          Running              NotOk
abc          Partially running    NotOk
abc          Stopped              NotOk
xyz          Running              NotOk
the          Running              NotOk
abc          Running              NotOk

Is there anything wrong with my query in the eval command? I want the "Result" field to be "Ok" if Running and "NotOk" for any other state, but here it seems not to be working as expected. Please help modify the query to get the output in the desired way. Thank you.
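The likely culprit is the quoting: in eval, single quotes denote field names and double quotes denote string literals, so 'Running' is read as a reference to a (nonexistent) field named Running, making the comparison null and therefore false on every row. Keeping single quotes only on the field and double quotes on the literal should behave as intended:

```
| eval Result=if('Node_State'=="Running", "Ok", "NotOk")
```

Since Node_State contains no special characters, plain Node_State=="Running" works too.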
I've been testing to get an ISE-Splunk successful authentication report, trying the search below, but "Calling-Station-ID" is not displaying in the table, even though I can see it exists.

index=network eventtype=cisco-ise CISE_RADIUS_Accounting host=ISEnode1 OR  | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S") | table indextime Calling-Station-ID

Any help out there? I'm new to Splunk search. Or has anyone got a sample Splunk ISE authentication report?

TIA
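Two things worth checking (hedged sketch): the dangling OR just before the first pipe looks like an incomplete clause that may break the base search, and hyphenated field names are safer to rename early, since in eval contexts a bare Calling-Station-ID is parsed as subtraction:

```
index=network eventtype=cisco-ise CISE_RADIUS_Accounting host=ISEnode1
| rename "Calling-Station-ID" as calling_station_id
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| table indextime calling_station_id
```

If the renamed column is still empty, the field may not be extracted at search time for these events; running `| fieldsummary` over a small sample would confirm the exact field name.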
Is there any way to publish Splunk dashboard to a slack channel?