All Topics

Hi, I am trying to get a list of workstations trying to connect to malicious DNS domains, using Palo Alto and Sysmon logs. From the Palo Alto logs I get the list of malicious domains detected and blocked with the query below, and then I use a join statement to look up, for each malicious domain, a matching DNS request entry in the Sysmon log. The query:

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 vendor_action=sinkhole (action=dropped OR action=blocked)
| dedup _time, file_name
| table _time file_name
| rename file_name as QueryName
| join QueryName
    [ search index=sysmon EventID=22
      | eval host_querying=Computer
      | table QueryName, host_querying ]
| table _time QueryName host_querying

My issue comes when several computers access the same malicious domain: the first occurrence found in the sysmon index is assigned to all the requests. I would like to join based on the domain and a time limit between correlated events. Is it possible to do this?
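One way to get all hosts per domain, rather than join's first match, is to search both sourcetypes at once and group by domain inside a time bucket. A minimal sketch reusing the field names from the question; the 5-minute correlation window is an assumption to tune:

(index="pan_logs" sourcetype="pan:threat" vendor_action=sinkhole (action=dropped OR action=blocked)) OR (index=sysmon EventID=22)
| eval QueryName=coalesce(file_name, QueryName), host_querying=Computer
| bin _time span=5m
| stats values(host_querying) as host_querying, count(eval(sourcetype="pan:threat")) as blocked by _time, QueryName
| where blocked > 0 AND isnotnull(host_querying)

The final where keeps only buckets where the domain was both blocked by the firewall and queried by at least one Sysmon-monitored host inside the same window.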
Is there a setting that stops the "Automatic lifetime extensions" (https://docs.splunk.com/Documentation/Splunk/9.0.3/Search/Extendjoblifetimes) of scheduled searches? We have some dashboards with scheduled searches, refreshed every x minutes, that are mainly used during business hours, so we scheduled them for business hours only. Sometimes someone needs to look at the dashboard at night. To make sure they see a recent result, we changed dispatch.ttl to 600 (the search normally runs every 5 minutes). So if the last scheduled run was at 6 pm, the results should be gone at 6:10 pm. If someone opens the dashboard at 8 pm, there aren't any results, and the search should run again. Now the real problem: some people don't close the dashboard, so the last run (from 6 pm) keeps getting extended, and anyone else who then looks at the dashboard sees that stale run. So, is there a setting to force the results to be deleted after x period?
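For reference, a minimal sketch of the setup described above as it would appear in savedsearches.conf; the stanza name and cron schedule are placeholders:

# savedsearches.conf
[My Dashboard Search]
enableSched = 1
# run every 5 minutes during business hours, Mon-Fri (assumed schedule)
cron_schedule = */5 8-18 * * 1-5
# keep each run's results for 600 seconds
dispatch.ttl = 600

The extension happens because an open dashboard keeps touching the dispatched job, which resets this TTL, as the linked documentation describes.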
The extension setup information ("here" and "here") references additional notes for the APM Machine Agent installation scenario, but the link is broken. If the page still exists, how do I install extensions for APM Machine Agents, and where are the instructions?
So far I can get the hosts and the forwarder version, but I am unable to get the index the forwarders belong to:

index="_internal" source="*metrics.log*" group=tcpin_connections fwdType=uf
| dedup hostname
| table hostname, version, os

How can I tie the above to the index that the hosts belong to?
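One possible way to tie the two together, sketched below, is to pull host-to-index pairs with tstats and join them on the host name; it assumes the forwarder's hostname in metrics.log matches the indexed host field:

index="_internal" source="*metrics.log*" group=tcpin_connections fwdType=uf
| dedup hostname
| table hostname, version, os
| join type=left hostname
    [ | tstats count where index=* by host index
      | stats values(index) as indexes by host
      | rename host as hostname
      | fields hostname, indexes ]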
Hi, I have a use case where I want to find out how many download API calls failed for a given document, and how many of the failed ones were successful on a subsequent call. I have no clue how to search this in Splunk. Right now I am finding the failed ones using the query below:

index=ty_ss "download/docIds?=" "500"
| rex "docId=(?<docId>.*)"
| eval event_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table docId, event_time
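One way to count both failures and later recoveries per document, sketched below: classify each call as failed or succeeded, take the first failure time and the last success time per docId, and compare them. Treating any non-500 call as a success is an assumption; adjust the outcome test to your actual status field:

index=ty_ss "download/docIds?="
| rex "docId=(?<docId>\S+)"
| eval outcome=if(searchmatch("500"), "failed", "succeeded")
| stats min(eval(if(outcome="failed", _time, null()))) as first_fail
        max(eval(if(outcome="succeeded", _time, null()))) as last_success
        by docId
| where isnotnull(first_fail)
| eval recovered=if(isnotnull(last_success) AND last_success > first_fail, 1, 0)
| stats count as failed_docs, sum(recovered) as recovered_docs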
I am trying to monitor drops in events per index. What is the best way to get a baseline and detect deviation in the volume? I am more interested in drops in events, not increases.
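A minimal sketch of one common approach: build an hourly count per index with tstats, derive a per-index baseline, and flag only the hours that fall well below it. The three-sigma threshold is an assumption to tune:

| tstats count where index=* by index _time span=1h
| eventstats avg(count) as avg_count, stdev(count) as stdev_count by index
| where count < avg_count - 3 * stdev_count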
Data stopped coming from vCenter to Splunk. I am not sure which DCN (data collection node) is used to collect from those vCenters. Could you please help with troubleshooting: how can I check for the error that caused the data to stop coming, and how can I find out which DCN is being used to collect the data from vCenter?
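As a starting point, a sketch of two checks. The index name vmware is an assumption (substitute whatever index the vCenter data lands in), and the DCN host is a placeholder:

| metadata type=hosts index=vmware
| sort - lastTime
| convert ctime(lastTime)

index=_internal log_level=ERROR host=<your_dcn_host>
| stats count by component sourcetype

The metadata search shows which hosts last delivered events into that index, which points at the DCN in use; the second search surfaces recent errors logged by the suspected collection node.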
We're migrating Splunk from our own AWS environment to Splunk SaaS. I am wondering if someone has the steps and a tentative effort estimate, as that can help me in creating a project plan. Thank you.
I want to install a HF or UF in our DMZ environment. The indexer is on the LAN. Communication from the DMZ to the LAN is not allowed. I need the logs from the DMZ to be pulled to the indexer in the LAN (using a HF or any other solution). Please share your insight on how to set this up, from your experience. Thanks in advance.
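For context: Splunk forwarders push rather than pull, so the usual pattern here is either a firewall exception for a single outbound port from a DMZ forwarder to the indexer, or an intermediate forwarder placed in a segment both sides may reach. A minimal outputs.conf sketch for the DMZ forwarder, with placeholder host and port:

# outputs.conf on the DMZ forwarder
[tcpout]
defaultGroup = lan_indexers

[tcpout:lan_indexers]
# placeholder address; the only flow to allow is DMZ -> indexer on this port
server = indexer.lan.example.com:9997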
I have duplicate notables/alerts coming in for a specific correlation search I created. I'm sure the problem is within the time ranges of the correlation search, but I cannot figure out what it is. I have this "AWS Instance Modified By Unusual User" search that I got from Security Essentials. This is my search:

index=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin OR eventName=CreateImage OR eventName=AssociateAddress OR eventName=AttachInternetGateway OR eventName=AttachVolume OR eventName=StartInstances OR eventName=StopInstances OR eventName=UpdateService OR eventName=UpdateLoginProfile
| stats earliest(_time) as earliest latest(_time) as latest by userIdentity.arn, eventName
| eventstats max(latest) as maxlatest
| where earliest > relative_time(maxlatest, "-1d@d")
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(earliest)
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(latest)
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(maxlatest)

This is the time range in the search: (screenshot omitted)

Yesterday I logged in to our AWS account for the first time and the alert fired, which is good. But the alert kept firing every hour afterwards. From my understanding, the alert should fire once and be over with.

8am, original alert:

userIdentity.arn  eventName     earliest             latest               maxlatest
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 07:57:11  2023-02-08 07:57:11

At 9am:

userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 08:57:55
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 07:57:11  2023-02-08 08:57:55

At 10am:

userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 09:51:21
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 09:20:11  2023-02-08 09:51:21

At 11am:

userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 10:15:26
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 09:20:11  2023-02-08 10:15:26

At 12pm:

userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 11:15:54
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 09:20:11  2023-02-08 11:15:54

And what I didn't mention is that, apart from the alert repeating itself each hour, two alerts are created every time, due to, I'm assuming, having two users. How can I fix this to alert only once, and to also report only once for both users? Thanks for any assistance.
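If the intent is a single notable per user per day, one option worth trying (a sketch, not the Security Essentials default) is throttling on the correlation search, which suppresses re-triggering while the same field values keep matching. In savedsearches.conf, or the Throttling section of the correlation search editor in ES:

alert.suppress = 1
alert.suppress.fields = userIdentity.arn
alert.suppress.period = 24h

Throttling per userIdentity.arn also addresses the two-alerts-per-run symptom, since each user's first notable independently suppresses that user's later ones.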
Hello all. As a Splunk user at an early stage, I currently have the following challenge: we have many indexes, and we want to analyse across all indexes how fast the log data becomes available in Splunk. The distance should be measured from the writing of the log (_time) to the indexing time (_indextime). We also want to exclude scatter (e.g. we currently have hosts with a wrong time configuration), i.e. assume something like a Gaussian normal distribution. Here is an example query, which is probably wrong or could be done much better by you:

| tstats latest(_time) AS logTime latest(_indextime) AS IndexTime WHERE index=bv* BY _time span=1h
| eval delta=IndexTime - logTime
| search (delta<1800 AND delta>0)
| table _time delta

Is the query approximately correct, so that we can answer the question of what delay we have overall? And how could one use a Gaussian normal distribution instead of restricting the search manually?
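A sketch of a statistical cut instead of the fixed 0 to 1800 second bounds: compute the per-event lag, estimate mean and standard deviation per index, and keep only values within three sigma. An event search is slower than tstats on large indexes, and the 24-hour window is an assumption, so treat this as an approach to adapt:

index=bv* earliest=-24h
| eval delta = _indextime - _time
| eventstats avg(delta) as mu, stdev(delta) as sigma by index
| where abs(delta - mu) <= 3 * sigma
| stats avg(delta) as avg_delay, median(delta) as median_delay, perc95(delta) as p95_delay by index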
Thanks in advance for any assistance you can please lend.  Can someone please tell me how I can configure an Enterprise Security correlation search that triggers only when a specific action is repeated by the same user 20 times in 5 minutes?
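A sketch of the usual shape for such a correlation search, scheduled every 5 minutes over the last 5 minutes; the base search, action value, and user field are placeholders for your data model or index:

index=your_index action="the_specific_action"
| bin _time span=5m
| stats count by user _time
| where count >= 20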
Dear All,

I've got a question regarding the syslog facility information which can be sent from a Huawei switch to Splunk. There is a facility command available on the Huawei switches which is meant to categorize outgoing traffic to a remote syslog server. I would like to know how to detect the facility information coming from the Huawei switch in Splunk and how to filter on it. Does anyone know how to filter the incoming traffic?

Related information:

Facility > Specifies a syslog server facility that is used to identify the log information source. You can plan a local value for the log information of a specified device, so that the syslog server can handle received log information based on the parameter.

In general there are channel groups, and there are log severity levels from 0 to 7 (0 = Emergencies, 1 = Alert, etc.). The priority values per facility and severity are:

             emergency alert critical error warning notice info debug
kernel           0      1      2      3      4      5      6     7
user             8      9     10     11     12     13     14    15
mail            16     17     18     19     20     21     22    23
system          24     25     26     27     28     29     30    31
security        32     33     34     35     36     37     38    39
syslog          40     41     42     43     44     45     46    47
lpd             48     49     50     51     52     53     54    55
nntp            56     57     58     59     60     61     62    63
uucp            64     65     66     67     68     69     70    71
time            72     73     74     75     76     77     78    79
security        80     81     82     83     84     85     86    87
ftpd            88     89     90     91     92     93     94    95
ntpd            96     97     98     99    100    101    102   103
logaudit       104    105    106    107    108    109    110   111
logalert       112    113    114    115    116    117    118   119
clock          120    121    122    123    124    125    126   127
local0         128    129    130    131    132    133    134   135
local1         136    137    138    139    140    141    142   143
local2         144    145    146    147    148    149    150   151
local3         152    153    154    155    156    157    158   159
local4         160    161    162    163    164    165    166   167
local5         168    169    170    171    172    173    174   175
local6         176    177    178    179    180    181    182   183
local7         184    185    186    187    188    189    190   191

This facility field can be used to more easily identify the modules/processes on a device that generate logs to your remote server.
After the facility parameter is configured with the info-center loghost command, the switch will send syslog packets containing the modified parameter (screenshot omitted). For example, if you want specific source modules to send logs to the remote server, you can modify the facility number contained in the packets for that loghost. Here the ARP logs will be sent to loghost 1 using the user-defined local0 facility, number 16 (configuration screenshot omitted). In the packet capture the facility is changed accordingly (capture screenshot omitted). So users can filter the logs on the facility field on a log server that supports it. At the same time, if the remote server allows it, the facility can also be used to store the logs in different places based on this field, so that specific logs with different facility values can be categorized into specific paths for easier tracking. Combined with severity, you can better find and track the different sources that send syslog packets.

Help would be very appreciated. Thanks in advance.

Best regards,

Danny
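On the Splunk side, one way to filter on facility (a sketch, assuming the raw event still starts with its syslog <PRI> header; many syslog inputs strip it at ingest) is to extract the priority and derive facility and severity, since PRI = facility * 8 + severity:

index=huawei sourcetype=syslog
| rex "^<(?<pri>\d+)>"
| eval facility = floor(pri / 8), severity = pri % 8
| search facility=16

The index and sourcetype are placeholders, and facility=16 matches the local0 example above (128 / 8 = 16).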
Hi, I am trying to get a list of workstations trying to connect to malicious DNS domains using Palo Alto and Windows AD logs. From the Palo Alto logs I get the list of malicious domains detected and blocked with the following query:

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 action=dropped OR action=blocked vendor_action=sinkhole
| dedup file_name
| table _time, file_name

_time                file_name
2023-02-09 11:42:59  d2azal32wgllwk.cloudfront.net
2023-02-09 11:42:19  meeo.it
2023-02-09 11:15:51  iemlfiles4.com
2023-02-09 10:26:42  jingermy.com

From the AD logs I get the DNS requests. I renamed src_ip because the field also exists in the Palo Alto logs:

index="msad" sourcetype="MSAD:NT6:DNS"
| eval host_querying=src_ip
| table _time, domain, host_querying

Those two queries work fine.

_time                domain                         host_querying
2023-02-09 12:23:32  media-waw1-1.cdn.whatsapp.net  192.168.20.215
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   8.8.4.4
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   192.168.20.27
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   8.8.4.4
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   192.168.20.27

Now I would like to extract the culprit of each DNS request:

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 action=dropped OR action=blocked vendor_action=sinkhole
| dedup file_name
| table _time, file_name
| join type=left left=L right=R where L.file_name=R.domain
    [ search index="msad" sourcetype="MSAD:NT6:DNS"
      | eval host_querying=src_ip
      | fields domain, host_querying ]
| table _time,file_name,host_querying

The result: empty columns for file_name (domain) and host_querying.

_time                file_name  host_querying
2023-02-09 12:00:06
2023-02-09 11:42:59

Could someone point out what I am doing wrong in the join statement? Thanks
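One likely cause, worth checking: when join is given dataset aliases (left=L right=R), the fields in the joined output carry those prefixes, so a final table that references the bare names comes up empty. A sketch of the adjusted tail, keeping everything else the same:

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 action=dropped OR action=blocked vendor_action=sinkhole
| dedup file_name
| table _time, file_name
| join type=left left=L right=R where L.file_name=R.domain
    [ search index="msad" sourcetype="MSAD:NT6:DNS"
      | eval host_querying=src_ip
      | fields domain, host_querying ]
| table L._time, L.file_name, R.host_querying

If the prefixed names do show up in the field picker, a rename such as | rename L.* as *, R.* as * restores the original names.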
Hello everyone, I want to change the percentage shown on the pie chart to be an integer. I am using this option to show the percentage:

<option name="charting.chart.showPercent">1</option>

and my query is:

index="test" sourcetype=csv
| chart count by status
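If no charting option gives the needed rounding, one workaround (a sketch) is to compute a rounded percentage in the search and fold it into the slice labels, so the pie shows integers regardless of the hover formatting:

index="test" sourcetype=csv
| stats count by status
| eventstats sum(count) as total
| eval status = status . " (" . round(count * 100 / total, 0) . "%)"
| fields status count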
Hello, I have log events that follow this structure:

"2023-01-10 09:54:18.566 | ERROR | 1 | GroupManagement| ExceptionHandler | UUID CC22E78A-E62D-4693-8D89-0A54E159DDC5 | hasError | This is the error message "

Each event has leading and trailing quotes and is delimited with the pipe character. I am having trouble creating the sourcetype and need some assistance. My biggest issue, I think, is that I have to remove the leading and trailing quotes so that Splunk does not treat the entire event as one field. I seem to be able to remove them using the following sourcetype, but it then does not identify the fields:

[sourcetype]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
disabled=false
FIELD_DELIMITER=|
FIELD_NAMES=timestamp,type,num,area,code,uuid,text,message
TRUNCATE=20000
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d %H:%M:%S,%3N
SEDCMD-remove_quotes=s/(?<!,)\"([^\"]*)\"/\1/g

Does anybody have an idea?

Thank you and best regards,

Andrew
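Two things worth checking, sketched below as an alternative props/transforms pair. First, FIELD_DELIMITER and FIELD_NAMES only take effect with INDEXED_EXTRACTIONS, which runs in a pipeline where SEDCMD does not apply, so the two settings work against each other. Second, the sample timestamp uses a period before the milliseconds while the stanza's TIME_FORMAT has a comma. A search-time delimiter extraction avoids both issues; the stanza and field names follow the question:

# props.conf
[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^"?
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# strip only the leading and trailing quotes
SEDCMD-remove_quotes = s/^"|"\s*$//g
REPORT-pipe_fields = pipe_delimited

# transforms.conf
[pipe_delimited]
DELIMS = "|"
FIELDS = timestamp,type,num,area,code,uuid,text,message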
Hello everybody, can you please tell me where I am making errors? I can't make the HTTPS Splunk Web page load with my self-signed certificate. I have a test environment with one Splunk server, where I have executed the following steps:

mkdir $SPLUNK_HOME/etc/auth/mycerts
cd $SPLUNK_HOME/etc/auth/mycerts
# Root CA private key
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out CAPK.key 2048
# Root CA signing request; for the Common Name I have tried everything (hostname, private IP, localhost, etc.) but it doesn't seem to make any difference
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key CAPK.key -out CACSR.csr
# my CA certificate
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in CACSR.csr -sha512 -signkey CAPK.key -CAcreateserial -out CACE.pem -days 1095
# I have configured the same password for both keys, but that doesn't seem to be the problem
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out DEPPK.key 2048
# for the Common Name value I have tried the same things as for the CA
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key DEPPK.key -out DEPCSR.csr
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in DEPCSR.csr -SHA256 -CA CACE.pem -CAkey CAPK.key -CAcreateserial -out DEPCE.pem -days 1095
cat DEPCE.pem DEPPK.key CACE.pem > DEPCEchain.pem

In /opt/splunk/etc/system/local/web.conf I have written:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/DEPPK.key
serverCert = /opt/splunk/etc/auth/mycerts/DEPCEchain.pem
startwebserver = 1
httpport = 8000

To see if the connection to the server is working I use:

openssl s_client -connect 192.168.1.11:8000
openssl s_client -connect 127.0.0.1:8000

and it says CONNECTED(00000003). Unfortunately, if I try to navigate Splunk Web over HTTPS, it doesn't load. I have tried putting the certificates inside /opt/splunk/etc/auth/splunkweb and then referencing them in web.conf, but nothing happens. This is what is written inside server.conf:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/CertificateAuthorityCertificate.pem
sslPassword = $7$7OQ1bcyW5b53gGJ/us2ExVKxerWlcolKjoS1j7pZ05QpmNmIUt7NQw==

I don't know what to try next; I can't find a solution, and no matter what I try it won't load in Splunk Web. Maybe it helps to say that I call https://192.168.1.11:8000/ in the browser. I even tried putting sslPassword inside web.conf with the key password, but nothing changed.
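One thing stands out, offered as a guess: both private keys were generated with -aes256, i.e. passphrase-protected, and Splunk Web generally needs a key without a passphrase (the sslPassword in server.conf applies to splunkd, not Splunk Web). A sketch of the quick test:

# strip the passphrase from the web server key (prompts for the current one)
$SPLUNK_HOME/bin/splunk cmd openssl rsa -in DEPPK.key -out DEPPK-nopass.key

# build a certificate-only chain for Splunk Web (no private key inside)
cat DEPCE.pem CACE.pem > DEPCEchain-web.pem

Then point web.conf at the new files and restart:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/DEPPK-nopass.key
serverCert = /opt/splunk/etc/auth/mycerts/DEPCEchain-web.pem

If it still fails, $SPLUNK_HOME/var/log/splunk/web_service.log usually names the certificate error after a restart.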
In the universal forwarder's splunkd logs I notice:

INFO AutoLoadBalancedConnectionStrategy [XXXXX TcpOutEloop] - After randomization, current is first in the list. Swapping with last item

What does this log message indicate?
Hi team, I have a Windows 10 machine sending logs to Splunk Enterprise. For that I opened TCP port 514. Checking metrics.log I see the events being delivered to Splunk (the IP of the Windows 10 machine is 192.168.2.11):

02-09-2023 08:55:06.031 +0000 INFO Metrics - group=tcpin_connections, 192.168.2.11:49713:514, connectionType=raw, sourcePort=49713, sourceHost=192.168.2.11, sourceIp=192.168.2.11, destPort=514, kb=0.000, _tcp_Bps=0.000, _tcp_KBps=0.000, _tcp_avg_thruput=0.012, _tcp_Kprocessed=339.454, _tcp_eps=0.000, _process_time_ms=0, evt_misc_kBps=0.000, evt_raw_kBps=0.000, evt_fields_kBps=0.000, evt_fn_kBps=0.000, evt_fv_kBps=0.000, evt_fn_str_kBps=0.000, evt_fn_meta_dyn_kBps=0.000, evt_fn_meta_predef_kBps=0.000, evt_fn_meta_str_kBps=0.000, evt_fv_num_kBps=0.000, evt_fv_str_kBps=0.000, evt_fv_predef_kBps=0.000, evt_fv_offlen_kBps=0.000, evt_fv_fp_kBps=0.000

I can see events from yesterday from that machine, but today I see nothing. The events are sent in syslog format with the message in CEF. So why can I see yesterday's events but not today's, even though I can see the events arriving at the Splunk server? Where can I check a log that tells me if something is going wrong? Thanks in advance
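A quick check worth trying, sketched below: if the CEF timestamps are parsed with a wrong format or time zone, today's events can be indexed under an unexpected _time and fall outside your search window. Comparing event time to index time over all time makes that visible:

index=* host=192.168.2.11 earliest=0
| eval index_time = strftime(_indextime, "%F %T"), lag_s = _indextime - _time
| table _time index_time lag_s sourcetype
| sort - index_time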
Hello everyone, I'm new to Splunk. I want to build a mini Splunk lab with virtual machines. Can someone share, if you know, where I can buy or use cheap/free virtual servers that are configurable enough for building a Splunk lab? I plan on building 8 servers with roles like this:
- 2 forwarders
- 3 indexers
- 1 cluster manager
- 1 search head
- 1 license manager / deployer / monitoring console

Hope someone can help. Thanks a lot.