All Topics



I want to install an HF or UF in our DMZ environment. The Indexer is on the LAN. Communication from the DMZ to the LAN is not allowed. I need the logs from the DMZ to be pulled to the Indexer in the LAN (using an HF or any other solution). Please share your insight, from your experience, on how to set this up. Thanks in advance.
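One caveat worth noting: Splunk forwarders push data to indexers; indexers do not pull over the Splunk-to-Splunk protocol. So the usual pattern is an intermediate heavy forwarder in the DMZ that collects the DMZ logs, with a single firewall exception allowing that one host to reach the indexer's receiving port. A minimal outputs.conf sketch for the DMZ forwarder, assuming a hypothetical indexer address of 10.0.0.5 and the default receiving port 9997:

    [tcpout]
    defaultGroup = lan_indexers

    [tcpout:lan_indexers]
    # 10.0.0.5:9997 is a placeholder; use your indexer's address and receiving port
    server = 10.0.0.5:9997

If no DMZ-to-LAN exception is possible at all, plain Splunk forwarding will not work on its own; some relay the firewall does permit (e.g. a syslog hop on the LAN side) would be needed instead.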
I have duplicate notables/alerts coming in for a specific correlation search I created. I'm sure the problem is within the time ranges of the correlation search but I cannot figure out what it is. I have this AWS Instance Modified By Usual User search that I got from Security Essentials.

This is my search:

index=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin OR eventName=CreateImage OR eventName=AssociateAddress OR eventName=AttachInternetGateway OR eventName=AttachVolume OR eventName=StartInstances OR eventName=StopInstances OR eventName=UpdateService OR eventName=UpdateLoginProfile
| stats earliest(_time) as earliest latest(_time) as latest by userIdentity.arn, eventName
| eventstats max(latest) as maxlatest
| where earliest > relative_time(maxlatest, "-1d@d")
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(earliest)
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(latest)
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(maxlatest)

This is the time range in the search:

Yesterday I logged in to our AWS account for the first time and the alert fired, which is good. But the alert kept firing every hour afterwards. From my understanding the alert should fire once and be over with.

8am, original alert:
userIdentity.arn  eventName     earliest             latest               maxlatest
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 07:57:11  2023-02-08 07:57:11

At 9am:
userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 08:57:55
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 07:57:11  2023-02-08 08:57:55

At 10am:
userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 09:51:21
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 09:20:11  2023-02-08 09:51:21

At 11am:
userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 10:15:26
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 09:20:11  2023-02-08 10:15:26

At 12pm:
userIdentity.arn  eventName     earliest             latest               maxlatest
bob               ConsoleLogin  2023-02-08 08:29:22  2023-02-08 08:29:22  2023-02-08 11:15:54
jim               ConsoleLogin  2023-02-08 07:57:11  2023-02-08 09:20:11  2023-02-08 11:15:54

And what I didn't mention is that, apart from the alert repeating itself each hour, two alerts are created every time due to, I'm assuming, having two users. How can I fix this to alert only once, and to also report only once for both users? Thanks for any assistance.
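A hedged sketch of why this repeats, assuming the search runs hourly over a multi-day window: the condition earliest > relative_time(maxlatest, "-1d@d") keeps matching the same user for a full day after their first appearance, because each hourly run recomputes maxlatest while jim's earliest is still inside the last day. One possible fix is to additionally require that the user's first event fell inside the current run's window:

| where earliest > relative_time(maxlatest, "-1d@d") AND earliest >= relative_time(now(), "-1h@h")

Alternatively, ES notable throttling (a suppression window grouped by userIdentity.arn) silences the hourly repeats without changing the SPL. Getting one alert covering both users would instead need the rows rolled up (e.g. | stats values(userIdentity.arn) as users) before the notable is created.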
Hello all. As someone at an early stage with Splunk, I currently have the following challenge: we have many indexes and we want to analyse, across all indexes, how fast log data becomes available in Splunk. The delay should be measured from when the log was written (_time) to the indexing time (_indextime). We also want to exclude outliers (e.g. we currently have hosts with a wrong time configuration), i.e. assume something like a Gaussian normal distribution. Here is an example query, which is probably wrong or could be done much better by you:

| tstats latest(_time) AS logTime latest(_indextime) AS IndexTime WHERE index=bv* BY _time span=1h
| eval delta=IndexTime - logTime
| search (delta<1800 AND delta>0)
| table _time delta

Is the query approximately correct, so that we can answer the question of what kind of delay we have overall? How could one use a Gaussian normal distribution instead of restricting the search manually?
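A hedged sketch of doing the outlier exclusion statistically rather than with hard cutoffs: compute the per-event delay, then drop events more than three standard deviations from the mean (the 3-sigma rule, which under a normal distribution keeps roughly 99.7% of values). This uses a raw event search, which is heavier than tstats but gives a true per-event delta; index=bv* is taken from the post, the time range is an assumption:

    index=bv* earliest=-24h
    | eval delta = _indextime - _time
    | eventstats avg(delta) as mu, stdev(delta) as sigma by index
    | where abs(delta - mu) <= 3 * sigma
    | bin _time span=1h
    | stats median(delta) as median_delay_s, perc95(delta) as p95_delay_s by index, _time

Note that median and perc95 are themselves robust against stragglers, so depending on how noisy the hosts are, the percentile columns alone may answer the question even without the sigma filter.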
Thanks in advance for any assistance you can lend. Can someone please tell me how I can configure an Enterprise Security correlation search that triggers only when a specific action is repeated by the same user 20 times in 5 minutes?
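A minimal sketch of the search part, assuming hypothetical index and field names (your_index, action, user). Scheduled every 5 minutes over the last 5 minutes, it only returns rows when the threshold is crossed, which is what makes the correlation search fire:

    index=your_index action="specific_action" earliest=-5m@m latest=@m
    | stats count by user
    | where count >= 20

In ES, the trigger condition would then be "number of results greater than 0".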
Dear All,

I've got a question regarding the syslog facility information which can be sent from a Huawei switch to Splunk. There is a facility command available on the Huawei switches which is meant to categorize outgoing traffic to a remote syslog server. I would like to know how to detect the facility information coming from the Huawei switch in Splunk and how to filter this type of information. Does anyone know how to filter the incoming traffic?

Related information: Facility > Specifies a syslog server facility that is used to identify the log information source. You can plan a local value for the log information of a specified device, so that the syslog server can handle received log information based on the parameter.

In general there are channel groups and there are different log severity levels from 0 to 7 (0 = Emergencies, 1 = Alert, etc.):

             emergency  alert  critical  error  warning  notice  info  debug
kernel           0        1       2        3       4       5      6      7
user             8        9      10       11      12      13     14     15
mail            16       17      18       19      20      21     22     23
system          24       25      26       27      28      29     30     31
security        32       33      34       35      36      37     38     39
syslog          40       41      42       43      44      45     46     47
lpd             48       49      50       51      52      53     54     55
nntp            56       57      58       59      60      61     62     63
uucp            64       65      66       67      68      69     70     71
time            72       73      74       75      76      77     78     79
security        80       81      82       83      84      85     86     87
ftpd            88       89      90       91      92      93     94     95
ntpd            96       97      98       99     100     101    102    103
logaudit       104      105     106      107     108     109    110    111
logalert       112      113     114      115     116     117    118    119
clock          120      121     122      123     124     125    126    127
local0         128      129     130      131     132     133    134    135
local1         136      137     138      139     140     141    142    143
local2         144      145     146      147     148     149    150    151
local3         152      153     154      155     156     157    158    159
local4         160      161     162      163     164     165    166    167
local5         168      169     170      171     172     173    174    175
local6         176      177     178      179     180     181    182    183
local7         184      185     186      187     188     189    190    191

This facility field can be used to more easily identify the modules/processes on a device that generate logs to your remote server.
After the facility parameter is configured with the info-center loghost command, the switch sends syslog packets containing the modified parameter. For example, if you require specific source modules to send logs to the remote server, you can modify the facility number contained in the packets for that loghost. Here the ARP logs would be sent to loghost 1 using the user-defined local0 facility (number 16), and in a packet capture the facility field changes accordingly. So users can filter the logs with the facility field on a log server that supports it. At the same time, if the remote server allows it, the facility can also be used to store the logs in different places based on this field, so specific logs with different facility values can be categorized in specific paths for easier tracking. Combined with severity, you can better find and track the different sources that send syslog packets.

Help would be very appreciated. Thanks in advance.

Best regards,

Danny
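A hedged sketch of the Splunk side: facility and severity are both encoded in the <PRI> value at the start of each raw syslog message (PRI = facility * 8 + severity), so if the priority header reaches Splunk intact you can decode and filter on it at search time. The sourcetype name is a placeholder:

    sourcetype=your_huawei_syslog
    | rex field=_raw "^<(?<pri>\d+)>"
    | eval facility = floor(pri / 8), severity = pri % 8
    | search facility=16

facility=16 corresponds to local0 (PRI 128-135 in the table above). Note that many syslog inputs strip the <PRI> header before indexing; in that case the decoding has to happen on the syslog receiver (e.g. rsyslog or syslog-ng templates) rather than in Splunk.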
Hi, I am trying to get a list of workstations trying to connect to malicious DNS using Palo Alto and Windows AD logs.

From the Palo Alto logs I get the list of malicious domains detected and blocked with the following query:

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 action=dropped OR action=blocked vendor_action=sinkhole
| dedup file_name
| table _time, file_name

_time                file_name
2023-02-09 11:42:59  d2azal32wgllwk.cloudfront.net
2023-02-09 11:42:19  meeo.it
2023-02-09 11:15:51  iemlfiles4.com
2023-02-09 10:26:42  jingermy.com

From the AD logs I get the DNS requests. I renamed src_ip because the field also exists in the Palo Alto logs.

index="msad" sourcetype="MSAD:NT6:DNS"
| eval host_querying=src_ip
| table _time, domain, host_querying

Those two queries work fine.

_time                domain                         host_querying
2023-02-09 12:23:32  media-waw1-1.cdn.whatsapp.net  192.168.20.215
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   8.8.4.4
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   192.168.20.27
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   8.8.4.4
2023-02-09 12:23:32  scontent-otp1-1.xx.fbcdn.net   192.168.20.27

Now I would like to extract the culprit of each DNS request.

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 action=dropped OR action=blocked vendor_action=sinkhole
| dedup file_name
| table _time, file_name
| join type=left left=L right=R where L.file_name=R.domain
    [search index="msad" sourcetype="MSAD:NT6:DNS"
    | eval host_querying=src_ip
    | fields domain, host_querying]
| table _time,file_name,host_querying

The result: empty columns for file_name (domain) and host_querying.

_time                file_name  host_querying
2023-02-09 12:00:06
2023-02-09 11:42:59

Could someone point out what I am doing wrong in the join statement? Thanks
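A hedged observation plus a sketch: with join's left=L right=R aliasing, I believe the output fields come out namespaced (L.file_name, R.host_querying), so the final | table file_name,host_querying finds nothing under the plain names. Join subsearches are also subject to result and time limits, so for this kind of correlation it is often simpler to skip join and use the Palo Alto results as a filter on the AD search:

    index="msad" sourcetype="MSAD:NT6:DNS"
        [ search index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 (action=dropped OR action=blocked) vendor_action=sinkhole
        | dedup file_name
        | rename file_name as domain
        | fields domain ]
    | table _time, domain, src_ip

The subsearch returns the blocked domains as domain=... conditions, so only matching AD DNS events survive. The parentheses around the OR are added defensively, since an unparenthesized OR in the base search can bind more widely than intended.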
Hello everyone, I want the percentage labels on a pie chart to be displayed as integers.

I am using this to show the percentage:

<option name="charting.chart.showPercent">1</option>

and my query is:

index="test" sourcetype=csv
| chart count by status
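I'm not aware of a documented Simple XML option that controls the decimal precision of the showPercent labels, so treat this as a workaround sketch: compute rounded percentages in the search itself and chart those values. The slice proportions stay identical, and the plain values shown for each slice are then integers:

    index="test" sourcetype=csv
    | chart count by status
    | eventstats sum(count) as total
    | eval percent = round(count * 100 / total)
    | fields status, percent

With this, showPercent could be dropped and the rounded percent values displayed directly; since pie slices are proportional, charting percent instead of count does not change the picture, only the displayed numbers.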
Hello, I have log events that follow this structure:

"2023-01-10 09:54:18.566 | ERROR | 1 | GroupManagement| ExceptionHandler | UUID CC22E78A-E62D-4693-8D89-0A54E159DDC5 | hasError | This is the error message "

Each event has leading and trailing quotes and is delimited with the pipe character. I am having trouble creating the sourcetype and need some assistance. My biggest issue, I think, is that I have to remove the leading and trailing quotes so that Splunk does not treat the entire event as one field. I seem to be able to remove them using the following sourcetype, but it does not then identify the fields:

[sourcetype]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
disabled=false
FIELD_DELIMITER=|
FIELD_NAMES=timestamp,type,num,area,code,uuid,text,message
TRUNCATE=20000
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d %H:%M:%S,%3N
SEDCMD-remove_quotes=s/(?<!,)\"([^\"]*)\"/\1/g

Does anybody have an idea?

Thank you and best regards,

Andrew
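A hedged sketch of an alternative using search-time delimiter extraction instead of FIELD_DELIMITER/FIELD_NAMES (which, as far as I know, only take effect together with INDEXED_EXTRACTIONS). Two details in the stanza above also look off for the sample shown: the timestamp uses a dot before the milliseconds, not a comma, and the quotes are simpler to strip with two anchored SEDCMDs. The sourcetype name is a placeholder.

props.conf:

    [andrew_pipe_logs]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^"?
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 30
    # strip the leading and trailing quote from each event
    SEDCMD-strip_leading_quote = s/^"//
    SEDCMD-strip_trailing_quote = s/"$//
    REPORT-pipe_fields = pipe_delimited_fields

transforms.conf:

    [pipe_delimited_fields]
    DELIMS = "|"
    FIELDS = "timestamp","type","num","area","code","uuid","text","message"

The extracted fields will keep the surrounding spaces from the " | " separators; if that matters, an EVAL-<field> = trim(<field>) per field in props.conf can tidy them up.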
Hello everybody, can you please tell me where I am making errors? I can't make HTTPS Splunk Web load with my self-signed certificate. I have a test environment with one Splunk server, where I have executed the following steps:

mkdir $SPLUNK_HOME/etc/auth/mycerts
cd $SPLUNK_HOME/etc/auth/mycerts
# Root CA private key
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out CAPK.key 2048
# Root CA signing request; at this point, for the Common Name, I have tried everything (hostname, private IP, localhost, etc.) but it doesn't seem to make any difference
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key CAPK.key -out CACSR.csr
# my CA certificate
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in CACSR.csr -sha512 -signkey CAPK.key -CAcreateserial -out CACE.pem -days 1095
# I have configured the same password for both keys, but that doesn't seem to be the problem
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out DEPPK.key 2048
# for the Common Name value I have tried the same things as for the CA
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key DEPPK.key -out DEPCSR.csr
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in DEPCSR.csr -SHA256 -CA CACE.pem -CAkey CAPK.key -CAcreateserial -out DEPCE.pem -days 1095
cat DEPCE.pem DEPPK.key CACE.pem > DEPCEchain.pem

In /opt/splunk/etc/system/local/web.conf I have written:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/DEPPK.key
serverCert = /opt/splunk/etc/auth/mycerts/DEPCEchain.pem
startwebserver = 1
httpport = 8000

To see if the connection to the server is going well I use:

openssl s_client -connect 192.168.1.11:8000
# OR
openssl s_client -connect 127.0.0.1:8000

and it says CONNECTED(00000003). Unfortunately, if I try to navigate Splunk Web over HTTPS, it doesn't load. I have tried putting the certificates inside /opt/splunk/etc/auth/splunkweb and then referencing them in web.conf, but nothing happens. This is what is written inside server.conf:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/CertificateAuthorityCertificate.pem
sslPassword = $7$7OQ1bcyW5b53gGJ/us2ExVKxerWlcolKjoS1j7pZ05QpmNmIUt7NQw==

I don't know what to try next; no matter what I try, it won't load in Splunk Web. Maybe it helps to say that I call https://192.168.1.11:8000/ in the browser. I even tried putting sslPassword inside web.conf with the key password, but nothing changed.
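Two things in the steps above stand out; treat this as a hedged sketch rather than a verified fix. First, for web.conf the private key should not be concatenated into the serverCert file: serverCert is expected to hold the server certificate followed by the CA chain, with the key referenced separately via privKeyPath. Second, DEPPK.key was generated with -aes256, and as far as I know Splunk Web cannot open a passphrase-protected key at startup, so stripping the passphrase is the simplest route:

    # remove the passphrase so splunkweb can read the key unattended
    $SPLUNK_HOME/bin/splunk cmd openssl rsa -in DEPPK.key -out DEPPK_nopass.key
    # server certificate followed by the CA certificate, no private key
    cat DEPCE.pem CACE.pem > DEPCEweb.pem

web.conf then becomes:

    [settings]
    enableSplunkWebSSL = true
    privKeyPath = /opt/splunk/etc/auth/mycerts/DEPPK_nopass.key
    serverCert = /opt/splunk/etc/auth/mycerts/DEPCEweb.pem
    startwebserver = 1
    httpport = 8000

After a restart, $SPLUNK_HOME/var/log/splunk/web_service.log is the place to look if the page still does not load.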
In the splunkd logs on a universal forwarder I notice:

INFO AutoLoadBalancedConnectionStrategy [XXXXX TcpOutEloop] - After randomization, current is first in the list. Swapping with last item

What does this log entry indicate?
Hi team, I have a Windows 10 machine sending logs to Splunk Enterprise. For that I opened TCP port 514. Checking metrics.log I see the events being delivered to Splunk (the IP for the Windows 10 machine is 192.168.2.11):

02-09-2023 08:55:06.031 +0000 INFO Metrics - group=tcpin_connections, 192.168.2.11:49713:514, connectionType=raw, sourcePort=49713, sourceHost=192.168.2.11, sourceIp=192.168.2.11, destPort=514, kb=0.000, _tcp_Bps=0.000, _tcp_KBps=0.000, _tcp_avg_thruput=0.012, _tcp_Kprocessed=339.454, _tcp_eps=0.000, _process_time_ms=0, evt_misc_kBps=0.000, evt_raw_kBps=0.000, evt_fields_kBps=0.000, evt_fn_kBps=0.000, evt_fv_kBps=0.000, evt_fn_str_kBps=0.000, evt_fn_meta_dyn_kBps=0.000, evt_fn_meta_predef_kBps=0.000, evt_fn_meta_str_kBps=0.000, evt_fv_num_kBps=0.000, evt_fv_str_kBps=0.000, evt_fv_predef_kBps=0.000, evt_fv_offlen_kBps=0.000, evt_fv_fp_kBps=0.000

I can see events from yesterday from that machine, but today I see nothing. Events are sent in syslog format with the message in CEF. So why can I see yesterday's events but not today's, even though I see the events reaching the Splunk server? Where can I check any log that would tell me if something is going wrong? Thanks in advance
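A hedged check worth running: when syslog/CEF timestamps are misparsed, events still arrive but land under the wrong date, so "today" looks empty even though data is flowing. Searching all time for the host and comparing event time against index time shows whether that is happening (host may appear as the IP or a resolved name, depending on your input):

    index=* host="192.168.2.11" earliest=0
    | eval index_lag_s = _indextime - _time
    | stats count, min(_time) as first_event, max(_time) as last_event, avg(index_lag_s) as avg_lag_s by index, sourcetype
    | convert ctime(first_event) ctime(last_event)

If events were stamped in the future, add latest=+1y as well, since the default latest=now would hide them.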
Hello everyone, I'm new to Splunk. I want to build a mini Splunk lab with virtual machines. Can someone share what you know: where can I buy or use cheap/free virtual servers that are configurable enough for building a Splunk lab? I plan on building 8 servers with roles like this:
- 2 forwarders
- 3 indexers
- 1 cluster manager
- 1 search head
- 1 license manager / deployer / monitoring console

Hope someone can help. Thanks a lot.
For the KV store, $SPLUNK_HOME/etc/system/local/server.conf was configured to use SSL. However, the following error occurs and the KV store process does not start properly. Regarding the Web UI, we recognise that there is no problem with the certificate itself, as TLS communication is possible using the same server certificate.

splunkd.log:

ERROR MongodRunner [5072 MongodLogThread] - mongod exited abnormally (exit code 1, status: exited with code 1) - look at mongod.log to investigate.

mongod.log:

CONTROL [main] Failed global initialisation: InvalidSSLConfiguration: Could not find private key attached to the selected certificate.

Please provide information on how to resolve the above issue.
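A hedged sketch based on how mongod loads certificates: unlike Splunk Web, the KV store reads the PEM referenced by serverCert in server.conf [sslConfig] and expects the private key to be inside that same file. If the PEM holds only the certificate, mongod fails with exactly this "Could not find private key attached to the selected certificate" error. Filenames below are placeholders:

    # combine the server certificate and its private key into one PEM for [sslConfig] serverCert
    cat server-cert.pem server-key.pem > server-combined.pem

server.conf:

    [sslConfig]
    serverCert = /opt/splunk/etc/auth/mycerts/server-combined.pem
    # plain-text passphrase of the key; splunkd hashes it on restart
    sslPassword = your_key_password

If the key is encrypted, sslPassword must match its passphrase; after editing, restart and check mongod.log again.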
I have created a view with JavaScript to add a search bar and display the results as an event list. How can I change the format in which the events are displayed? I want to use a user script to format the way the events are rendered.
Hi, I have a large index that contains Windows event logs. I am trying to extract the usernames from EventID 4648 events. How can I get these displayed along with the name of the computer that was logged into? Thanks in advance.
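A minimal sketch, assuming the Splunk Add-on for Microsoft Windows field names and a hypothetical index called wineventlog; for 4648, Account_Name is typically multivalued (the subject account plus the account whose credentials were used):

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4648
    | stats count by ComputerName, Account_Name

Depending on which "computer" is wanted, Target_Server_Name (the host the credentials were used against) may be the better split-by field than ComputerName (the host that logged the event).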
For example: I have been hitting the pavement trying to figure out a search query for events that happened between 3:00 and 3:15, where the next search window should be 3:01 to 3:16, and so on, then count the total events that occurred in each 15-minute bucket. Thank you guys in advance; any help and suggestions are greatly appreciated.
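A sketch of a sliding 15-minute window in one search, assuming a hypothetical index: bucket events by minute, then let streamstats sum the trailing 15 buckets, so each output row is the count for the 15-minute window ending at that minute:

    index=your_index
    | timechart span=1m count
    | streamstats sum(count) as count_last_15m window=15

Each row's count_last_15m covers that minute bucket plus the 14 preceding ones, which steps the 15-minute window forward one minute at a time.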
Hi guys, I was wondering if someone could please give me a hand with this. We have written a custom TA to extract logs from a log source. Example log messages:

INFO 09 Feb 14:31:53 [pool-3-thread-1] WebHandlerAPI - Received GET request at /api/monitor/logger from [IP ADDRESS]
INFO 09 Feb 14:31:53 [pool-4-thread-1] WebHandlerAPI - Received GET request at /api/monitor/performance from [IP ADDRESS]
INFO 09 Feb 14:31:53 [thread_check] threadMonitor - 15 threads running OK

props.conf:

category = Application
disabled = false
pulldown_type = true
TIME_FORMAT = %d %b %T
TIME_PREFIX = \s+\w+\s+
MAX_TIMESTAMP_LOOKAHEAD = 20
EXTRACT-pdr_generic = (?:\s|)(?P<level>.*?)\s+(?P<timestamp>\d.*?)\s+\[(?P<message_type>.*?)\]\s+(?P<message>.*?)$

It would be great if someone could please point out which part of the props.conf needs to be improved.
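A hedged sketch of likely tweaks, assuming the goal is correct timestamping and field extraction on these lines: TIME_PREFIX = \s+\w+\s+ requires leading whitespace, which the sample lines do not have, so anchoring it to the start of the line is the usual fix, and explicit line-breaking settings avoid surprises. The stanza name is a placeholder; merge these into the TA's existing stanza:

    [your_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^\s*\w+\s+
    TIME_FORMAT = %d %b %T
    MAX_TIMESTAMP_LOOKAHEAD = 20
    EXTRACT-pdr_generic = ^\s*(?P<level>\w+)\s+(?P<timestamp>\d{2}\s+\w{3}\s+\d{2}:\d{2}:\d{2})\s+\[(?P<message_type>[^\]]+)\]\s+(?P<message>.+)$

One more caveat: the log lines carry no year, so %d %b %T makes Splunk assume the current year; events straddling New Year can be mis-dated.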
I need to group by a field where all possible values should be shown in the result. For example, the snippet below groups by interface, but rows are omitted if the query returns no results for an interface:

<search>
| stats count(eval(state="success")) as count by interface

For example, three interfaces exist: [A, B, C]. The search has no results for C.

Output:
interface  count
A          100
B          200

Missing record:
C          0

How can the missing records be included? Is there any option where a lookup table is not used?
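A sketch without a lookup, hardcoding the known interface list from the example (A,B,C is an assumption): append a zero-count row per interface, then take the max so the real counts win over the placeholder zeros:

    <search>
    | stats count(eval(state="success")) as count by interface
    | append
        [| makeresults
        | eval interface=split("A,B,C", ",")
        | mvexpand interface
        | eval count=0
        | fields interface, count]
    | stats max(count) as count by interface

makeresults/split/mvexpand fabricates one row per known interface; the final stats collapses duplicates, keeping 0 only where the real search produced nothing.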
I get logs from a system which has a field that contains names. Let's say Abc.xyz is the name of the field. I have a list of names in a CSV with 3 columns: id, name, description. I have already created the lookup table file and definition. Can someone help me with a search query to alert every time any name from the test.csv file matches a name in the Abc.xyz field from the logs?
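A minimal sketch, assuming a hypothetical index and that the lookup definition is named test (pointing at test.csv): enrich each event via the lookup and keep only the rows that matched, then save the search as an alert that triggers when the result count is greater than zero:

    index=your_index
    | lookup test name AS Abc.xyz OUTPUT description
    | where isnotnull(description)
    | table _time, Abc.xyz, description

isnotnull(description) works as the match test because OUTPUT only populates description on rows where the name was found in the CSV.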
Hi, I have 10 hosts, and only 3 of them are reporting to the DS; 7 are not reporting. When I searched _internal I could see logs coming in from only those 3 hosts. How can I troubleshoot this issue further?
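A hedged starting point: if the 7 hosts show nothing in _internal at all, they are not forwarding anything, which usually points to splunkd not running, outputs.conf missing, or the network path to the receiving port being blocked, rather than a deployment-server-specific problem. Phone-home activity can be checked with a search like:

    index=_internal sourcetype=splunkd component=DC:DeploymentClient
    | stats latest(_time) as last_phonehome by host

and on a silent forwarder, the client configuration and its DS target can be verified locally:

    $SPLUNK_HOME/bin/splunk btool deploymentclient list --debug
    $SPLUNK_HOME/bin/splunk show deploy-poll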