All Topics


Hello everybody, can you please tell me where I am making errors? I can't get Splunk Web to load over HTTPS with my self-signed certificate. I have a test environment with one Splunk server, where I have executed the following steps:

mkdir $SPLUNK_HOME/etc/auth/mycerts
cd $SPLUNK_HOME/etc/auth/mycerts
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out CAPK.key 2048   # root CA private key
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key CAPK.key -out CACSR.csr   # root CA signing request
# at this point I have tried putting everything in the Common Name (hostname, private IP, localhost, etc.) but it doesn't seem to make any difference
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in CACSR.csr -sha512 -signkey CAPK.key -CAcreateserial -out CACE.pem -days 1095   # my CA certificate
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out DEPPK.key 2048   # I configured the same password for both keys, but that doesn't seem to be the problem
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key DEPPK.key -out DEPCSR.csr   # for the Common Name I tried the same values as for the CA
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in DEPCSR.csr -sha256 -CA CACE.pem -CAkey CAPK.key -CAcreateserial -out DEPCE.pem -days 1095
cat DEPCE.pem DEPPK.key CACE.pem > DEPCEchain.pem

In /opt/splunk/etc/system/local/web.conf I have written:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/DEPPK.key
serverCert = /opt/splunk/etc/auth/mycerts/DEPCEchain.pem
startwebserver = 1
httpport = 8000

To check whether the connection to the server works I use:

openssl s_client -connect 192.168.1.11:8000
# or
openssl s_client -connect 127.0.0.1:8000

and it says CONNECTED(00000003). Unfortunately, if I try to open Splunk Web over HTTPS it doesn't load. I have also tried putting the certificates inside /opt/splunk/etc/auth/splunkweb and referencing them in web.conf, but nothing happens. This is what is written inside server.conf:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/CertificateAuthorityCertificate.pem
sslPassword = $7$7OQ1bcyW5b53gGJ/us2ExVKxerWlcolKjoS1j7pZ05QpmNmIUt7NQw==

I don't know what to try next; no matter what I try, Splunk Web won't load. It may help to mention that I open https://192.168.1.11:8000/ in the browser. I even tried putting sslPassword with the key password inside web.conf, but nothing changed.
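Two things in the steps above stand out, offered as a hedged suggestion rather than a confirmed diagnosis: the web private key is AES-encrypted, which Splunk Web often cannot read even when splunkd can, and the key itself was concatenated into the serverCert chain, where it does not belong. A minimal sketch of isolating both, reusing the file names from the post:

cd $SPLUNK_HOME/etc/auth/mycerts
# write an unencrypted copy of the web key (prompts for the passphrase)
$SPLUNK_HOME/bin/splunk cmd openssl rsa -in DEPPK.key -out DEPPK-nopass.key
# chain for Splunk Web: server cert plus CA cert only, no private key
cat DEPCE.pem CACE.pem > DEPCEweb.pem

web.conf would then point at the new files:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/DEPPK-nopass.key
serverCert = /opt/splunk/etc/auth/mycerts/DEPCEweb.pem

If Splunk Web still fails after a restart, $SPLUNK_HOME/var/log/splunk/web_service.log usually records the concrete SSL error.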
Splunkd logs - in the universal forwarder I notice:

INFO AutoLoadBalancedConnectionStrategy [XXXXX TcpOutEloop] - After randomization, current is first in the list. Swapping with last item

What does this log line indicate?
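For context, and as a reading rather than an official definition: this message comes from the forwarder's auto load balancing across the indexers listed in outputs.conf, e.g. a stanza like this (server names are placeholders):

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
autoLBFrequency = 30

The strategy shuffles the target list, and the INFO line appears to record routine housekeeping: after shuffling, the currently connected indexer landed first in the list, so it was swapped to the end to avoid being picked again immediately. At INFO level it is not a sign of a problem by itself.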
Hi team, I have a Windows 10 machine sending logs to Splunk Enterprise. For that I opened TCP port 514. Checking metrics.log I see the events being delivered to Splunk (the IP of the Windows 10 machine is 192.168.2.11):

02-09-2023 08:55:06.031 +0000 INFO Metrics - group=tcpin_connections, 192.168.2.11:49713:514, connectionType=raw, sourcePort=49713, sourceHost=192.168.2.11, sourceIp=192.168.2.11, destPort=514, kb=0.000, _tcp_Bps=0.000, _tcp_KBps=0.000, _tcp_avg_thruput=0.012, _tcp_Kprocessed=339.454, _tcp_eps=0.000, _process_time_ms=0, evt_misc_kBps=0.000, evt_raw_kBps=0.000, evt_fields_kBps=0.000, evt_fn_kBps=0.000, evt_fv_kBps=0.000, evt_fn_str_kBps=0.000, evt_fn_meta_dyn_kBps=0.000, evt_fn_meta_predef_kBps=0.000, evt_fn_meta_str_kBps=0.000, evt_fv_num_kBps=0.000, evt_fv_str_kBps=0.000, evt_fv_predef_kBps=0.000, evt_fv_offlen_kBps=0.000, evt_fv_fp_kBps=0.000

I can see events from yesterday from that machine, but today I see nothing. Events are sent in syslog format with the message in CEF. So why can I see yesterday's events but not today's, even though I see the events reaching the Splunk server? Where can I check a log that would tell me if something is going wrong? Thanks in advance
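A quick check worth running, since wrongly parsed timestamps are a common cause with syslog/CEF payloads: compare index time with event time. This is a sketch; the index filter is an assumption, so widen it to wherever the data lands:

index=* host=192.168.2.11 _index_earliest=-1h
| eval lag_seconds = _indextime - _time
| stats count min(lag_seconds) max(lag_seconds) by sourcetype

If events appear here with a large positive or negative lag, they are being indexed but stamped outside the search time range, and the fix is usually TIME_PREFIX/TIME_FORMAT in props.conf for that sourcetype.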
Hello everyone, I'm new to Splunk. I want to build a mini Splunk lab with virtual machines. Can someone share what they know? Where can I buy or use cheap/free virtual servers that are configurable enough for building a Splunk lab? I plan on building 8 servers with roles like this:
- 2 forwarders
- 3 indexers
- 1 cluster manager
- 1 search head
- 1 license manager / deployer / monitoring console
Hope someone can help. Thanks a lot.
For the KV store, $SPLUNK_HOME/etc/system/local/server.conf was configured to use SSL. However, the following error is occurring and the KV store process is not starting properly. Regarding the Web UI, we recognise that there is no problem with the certificate itself, as TLS communication is possible using the same server certificate.

splunkd.log:
ERROR MongodRunner [5072 MongodLogThread] - mongod exited abnormally (exit code 1, status: exited with code 1) - look at mongod.log to investigate.

mongod.log:
CONTROL [main] Failed global initialisation: InvalidSSLConfiguration: Could not find private key attached to the selected certificate.

Please provide information on how to resolve the above issue.
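One pattern that matches this exact mongod error, offered as a sketch: the KV store requires the server certificate and its private key concatenated in the single PEM file that serverCert points to. If the key lives in a separate file (which works for Splunk Web via privKeyPath), mongod reports InvalidSSLConfiguration. File names below are placeholders:

cat mySplunkCert.pem mySplunkKey.key > mySplunkCombined.pem

[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkCombined.pem
sslPassword = <passphrase of the key, if it is encrypted>

After restarting Splunk, mongod.log shows whether the combined file was accepted.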
I have created a view with JavaScript to add a search bar and display results as an event list. How can I change the format in which the events are displayed? I want to use a user script to format the way the events are displayed.
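If the view uses SplunkJS components, the display format is a setting on the events viewer itself rather than something a separate script has to patch afterwards. A sketch, assuming a SearchManager with id "search1" already exists and a container div is present (both are assumptions about your view):

require(["jquery", "splunkjs/mvc/eventsviewerview"], function($, EventsViewer) {
    new EventsViewer({
        id: "events",
        managerid: "search1",       // assumed id of your SearchManager
        type: "list",               // "list", "raw", or "table"
        "list.drilldown": "none",
        count: 20,                  // events per page
        el: $("#events-container")  // assumed container element
    }).render();
});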
Hi, I have a large index that contains Windows event logs. I'm trying to extract the usernames from EventID 4648 events. How can I get them displayed along with the computer name that was logged into? Thanks in advance.
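A starting point, assuming the field names extracted by the standard Windows add-on (Account_Name, Target_Server_Name, ComputerName); adjust them to whatever your events actually carry:

index=wineventlog EventCode=4648
| stats count by Account_Name, Target_Server_Name, ComputerName

Note that on 4648 Account_Name is often multivalue (the subject account plus the account whose credentials were used), so mvindex(Account_Name, 1) may be needed to isolate the one you want.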
For example: I have been hitting the pavement trying to figure out a search query for events that happened between 3:00 and 3:15; my next search should be 3:01 to 3:16, and so on, then count all the total events that occurred in the 15-minute buckets. Thank you guys in advance; any help and suggestions are greatly appreciated.
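One way to get that sliding 15-minute count in a single search, rather than one search per window, is streamstats with a time window. A sketch, with the base search left as a placeholder and one-minute steps assumed:

index=main <your filters>
| timechart span=1m count
| streamstats time_window=15m sum(count) as events_in_prev_15m

Each row then carries the event total for the 15 minutes ending at that row's timestamp, advancing one minute per row.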
Hi guys, I was wondering if someone could please give me a hand with this. We have written a custom TA to extract logs from a log source. Example log messages:

INFO 09 Feb 14:31:53 [pool-3-thread-1] WebHandlerAPI - Received GET request at /api/monitor/logger from [IP ADDRESS]
INFO 09 Feb 14:31:53 [pool-4-thread-1] WebHandlerAPI - Received GET request at /api/monitor/performance from [IP ADDRESS]
INFO 09 Feb 14:31:53 [thread_check] threadMonitor - 15 threads running OK

props.conf:

category = Application
disabled = false
pulldown_type = true
TIME_FORMAT = %d %b %T
TIME_PREFIX = \s+\w+\s+
MAX_TIMESTAMP_LOOKAHEAD = 20
EXTRACT-pdr_generic = (?:\s|)(?P<level>.*?)\s+(?P<timestamp>\d.*?)\s+\[(?P<message_type>.*?)\]\s+(?P<message>.*?)$

It would be great if someone could please point out which part of props.conf needs to be improved.
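One likely culprit, guessed from the samples above: TIME_PREFIX = \s+\w+\s+ requires whitespace before the level, but these lines begin with the level itself, so the timestamp may never be anchored. Anchoring at line start is safer, assuming the level is always a single leading word:

[your:sourcetype]
TIME_PREFIX = ^\w+\s+
TIME_FORMAT = %d %b %T
MAX_TIMESTAMP_LOOKAHEAD = 20

The [your:sourcetype] header is a placeholder; note that the pasted props.conf is also missing its stanza header, and if that is missing in the deployed file too, none of these settings would apply at all.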
I need to group by a field where all possible values should be shown in the result. For example, the snippet below groups by interface, but rows can be omitted if the query does not return results for an interface.

<search>
| stats count(state='success') as count by interface

For example, three interfaces exist: [A, B, C]. The search has no results for C.

Output:
interface   count
A           100
B           200

Missing record:
C           0

How can any missing records be included? Is there an option where a lookup table is not used?
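One lookup-free approach is to append a zero row for every known interface and then collapse the duplicates; the hard-coded "A,B,C" list is an assumption standing in for your real set of interfaces:

<your search>
| stats count(eval(state="success")) as count by interface
| append
    [| makeresults
     | eval interface=split("A,B,C", ",")
     | mvexpand interface
     | eval count=0
     | fields interface count]
| stats max(count) as count by interface

The final stats keeps the real count where one exists and falls back to the appended 0 for interfaces the search never saw.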
I get logs from a system with a field that contains names. Let's say Abc.xyz is the name of the field. I have a list of names in a CSV with 3 columns: id, name, description. I have already created the lookup table file and definition. Can someone help me set up a search query to alert every time any name from the test.csv file matches a name from the Abc.xyz field in the logs?
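A sketch of one way, assuming the lookup definition is named test and that its name column should match the event field; the index/sourcetype filter is a placeholder:

index=<your_index> sourcetype=<your_sourcetype>
| rename "Abc.xyz" as name
| lookup test name OUTPUT id, description
| where isnotnull(id)

Saved as an alert that triggers when the number of results is greater than 0, this fires whenever a logged name appears in the CSV.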
Hi, I have 10 hosts, and only 3 of them are reporting to the deployment server; 7 are not. When I searched _internal I could see logs coming in from only 3 hosts. How can I troubleshoot this issue further?
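Two hedged starting points, one on the deployment server and one on a silent client (paths assume a standard install): first see which clients are actually phoning home, then verify a silent host's deployment client settings.

index=_internal sourcetype=splunkd_access phonehome
| stats latest(_time) as last_phonehome by clientip

$SPLUNK_HOME/bin/splunk show deploy-poll
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

Hosts missing from the first search never reached the DS at all, which usually points at deploymentclient.conf, DNS resolution, or a firewall blocking port 8089.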
I have a field called folder_path with values like the following:

folder_path
\Device\XYZ\Users\user_A\AppData\program\Send to OneNote.lnk
\Device\RTF\Users\user_B\AppData\program\send to file.Ink

Now I want to extract the following fields from folder_path:

username   file_destination
user_A     Send to OneNote.lnk
user_B     send to file.Ink

The username is extracted after the string "Users\", and file_destination is everything after the last backslash. I have tried a few ways but couldn't extract the fields properly because of the backslashes.
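Backslashes need double escaping in SPL, once for the quoted string and once for the regex, so matching one literal \ takes \\\\ in the search bar. A sketch, assuming a username never contains a backslash:

| rex field=folder_path "Users\\\\(?<username>[^\\\\]+)"
| rex field=folder_path "\\\\(?<file_destination>[^\\\\]+)$"

The first rex captures the path segment right after Users\, and the second captures everything after the final backslash.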
Hi, one of my indexers stopped receiving events because the indexer queue is full. Checking splunkd.log, I see lots of these error messages:

02-09-2023 09:49:02.354 +1100 ERROR pipeline [1807 indexerPipe_1] - Runtime exception in pipeline=indexerPipe processor=indexer error='Unable to create directory /mnt/splunk_index_hot/_internaldb/db/hot_v1_311115391 because Input/output error' confkey='source::/opt/splunk/var/log/splunk/splunkd.log|host::hostname|splunkd|1456359'
02-09-2023 09:49:02.354 +1100 ERROR pipeline [1807 indexerPipe_1] - Uncaught exception in pipeline execution (indexer) - getting next event

Has anyone come across this situation? How do I fix it? Thanks!
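"Input/output error" is raised by the operating system rather than Splunk, so the storage behind /mnt/splunk_index_hot is the first thing to inspect. A sketch of the usual OS-level checks, run on the indexer with the mount point adjusted if yours differs:

df -h /mnt/splunk_index_hot    # full filesystem?
df -i /mnt/splunk_index_hot    # inode exhaustion?
dmesg | tail -50               # kernel-level disk/filesystem errors
mount | grep splunk_index_hot  # read-only remount after an error is common
touch /mnt/splunk_index_hot/.write_test && rm /mnt/splunk_index_hot/.write_test

If the volume has gone read-only or the disk is failing, fixing that (then restarting Splunk) clears the blocked queues.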
I would like to set up alerts based on the running status of processes on a server. We have a couple of processes (services) running on the server, and to track their status I need to set up alerts. I'd appreciate it if anyone could share the possibilities. Thanks in advance.
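If the Splunk Add-on for Unix and Linux (or the Windows equivalent) is already collecting process data, a scheduled alert over it can do this; the index, host, and process names below are assumptions (the *nix add-on writes process listings with sourcetype=ps):

index=os sourcetype=ps host=<your_server> COMMAND IN ("myserviced", "otherservice")
| stats latest(_time) as last_seen by host, COMMAND

Scheduled every few minutes and triggered when the number of results drops below the expected count, this fires when one of the tracked processes stops appearing in the listings.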
I am trying to create a Splunk Classic dashboard to show metrics for important Splunk errors, like crash logs and logs that cause Splunk performance issues. What are the important Splunk error logs that need to be monitored in this case, and where do I find them?
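A few hedged starting points, all built on the _internal index that Splunk populates from its own $SPLUNK_HOME/var/log/splunk files:

index=_internal sourcetype=splunkd log_level=ERROR
| timechart count by component limit=10

index=_internal source=*crash*.log

index=_internal sourcetype=splunkd component=Metrics group=queue blocked=true
| stats count by host, name

The first panel trends error-producing components, the second surfaces crash logs, and the third shows blocked queues, a frequent early symptom of performance trouble.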
As I write this I realize that what I want is likely not possible using this method. I want a fillnull (or similar) to happen before an eval. The eval is likely not even called if there are no events in the timechart span I am looking at. I want the eval to return a 1 when there are no events in that span.

This works, but is missing the eval:

index=main sourcetype=iis cs_host="site1.mysite.com"
| timechart span=10s max(time_taken)
| fillnull value=1

This is what I am using. It works, except when no events happen:

index=main sourcetype=iis cs_host="site1.mysite.com"
| eval site1_up=if(sc_status=200,1,0)
| timechart span=10s max(site1_up)

This charts a 1 if there was at least one 200 response from site1.mysite.com in the 10s span. It charts a 0 if there were responses but none were 200. If there are no matching events, the span is probably never evaluated, nothing is returned, and the chart looks like a 0. I want a 1 charted if there are no events in that 10s span.

Adding | fillnull value=200 sc_status after the timechart simply shows an extra column of sc_status at 200 in every span (column in the chart). Putting it before the eval does not work, since I believe nothing is done without an event. It should also only use fillnull (or similar) if no events are in that 10-second span. I have also tried | append [| makeresults ] without success, but I don't completely understand how that would work.

Logically, this is what I want. The reasoning for the up/down status is not important, since this is simply an example. For each 10s span in the timechart:

| eval Site1_up=1 if cs_host=A and at least one sc_status=200
| eval Site1_up=0 if cs_host=A and no sc_status=200
| eval Site1_up=1 if there are no events matching cs_host=A
| eval Site2_up=1 if cs_host=B and at least one cs_method=POST
| eval Site2_up=0 if cs_host=B and no cs_method=POST
| eval Site2_up=1 if there are no events matching cs_host=B
| eval Site3_up=1 if cs_host=C AND cs_User_Agent=Mozilla and at least one cs_uri_stem=check.asmx
| eval Site3_up=0 if cs_host=C AND cs_User_Agent=Mozilla and no cs_uri_stem=check.asmx
| eval Site3_up=1 if there are no events matching cs_host=C

I am trying to make a chart of the up(1)/down(0) status of various components, some of which are determined by the IIS logs. Thanks
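One detail that may unblock this: timechart emits a row for every span in the range, even spans with no events; those rows just carry a null value, and fillnull can target that specific column after the timechart. A sketch for the single-site case:

index=main sourcetype=iis cs_host="site1.mysite.com"
| eval site1_up=if(sc_status=200,1,0)
| timechart span=10s max(site1_up) as site1_up
| fillnull value=1 site1_up

Spans with at least one 200 chart as 1, spans with traffic but no 200 chart as 0, and empty spans are filled to 1. The multi-site version follows the same shape with one eval and one fillnull column per site.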
I have logs that contain parts like:

.. { "profilesCount" : { "120000" : 100 , "120001" : 500 , "110105" : 200 , "totalProfilesCount" : 1057}} ..

Here the key is an accountId and the value is the number of profiles in it. When I use max_match=0 in rex and extract these values I get accountId=[12000000, 12000001, 11001005] and pCount=[100, 500, 200] for this example event. Since these accountIds are not mapped to their corresponding pCount, when I visualize them I get:

accountId   pCount
12000000    100 500 200
12000001    100 500 200
11001005    100 500 200

How can I map them correctly and show them in table form? This was my search query:

search <search_logic>
| rex max_match=0 "\"(?<account>\d{8})\" : (?<pCount>\d+)"
| stats values(pCount) by account

Thanks in advance
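The usual fix is to extract each key:value pair as one multivalue string first, expand, and only then split, so the pairing survives; a sketch assuming the quoting in the sample holds:

<search_logic>
| rex max_match=0 "(?<pair>\"\d+\"\s*:\s*\d+)"
| mvexpand pair
| rex field=pair "\"(?<account>\d+)\"\s*:\s*(?<pCount>\d+)"
| table account pCount

mvexpand gives each pair its own event, so the second rex maps every account to exactly its own count (and the \d+ key pattern skips the non-numeric totalProfilesCount key).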
I have a lookup with a field called IP. The field has values that contain multiple IPs, and I would like to separate them out, each into its own field. Some IPs are separated by commas and some by semicolons, and some fields have 3+ IPs. Regardless, I need the IPs beyond the first one to be in their own field columns, named IP2, IP3, etc.

What I have:
IP
1.1.1.1,2.2.2.2
or
IP
1.1.1.1;2.2.2.2

I've tried something like the below, but makemv only seems to work for the ",", and the separated IPs still show up in the original IP field:

| makemv delim=";" allowempty=true IP
| makemv delim="," allowempty=true IP
| mvexpand IP
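Chaining two makemv calls tends not to work because the second one sees a field that is already multivalue, but a tokenizer regex handles both separators in one pass, and mvindex can then fan the values out into numbered columns. A sketch, assuming at most three IPs per cell:

| makemv tokenizer="([^,;]+)" IP
| eval IP2=mvindex(IP,1), IP3=mvindex(IP,2), IP=mvindex(IP,0)

For more than three, keep adding IPn=mvindex(IP,n-1); mvindex returns null when the index does not exist, so shorter lists are safe.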
Hi Team, I have a requirement to build a metrics report with the conditions below:
- A similar report for 3 different teams (each team should not be able to access the others')
- The underlying data (within the index) may contain sensitive information, so only the report should be accessible, not the entire index
- The metrics of the 3 teams are present in the same index and sourcetype
- It should be flexible enough to include extra information in the report in future (for history as well)

With these requirements, I thought of the solutions below but could not meet all the requirements.

Embedded reports: they run only for the specific scheduled time range, so there is no flexibility in selecting different time ranges.

Summary indexing: I would need to create separate summary indexes (per team) and build a report/dashboard on each, but adding extra information to historical metrics is difficult (that's my perception, correct me if I am wrong!).

Creating datasets: we can create separate datasets from one single root search, but I am not sure how access controls work with datasets. Please enlighten!

Do we have any better solution, please? Or do you feel one of the above solutions would best meet my requirements? Any suggestions are welcome!
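For the summary-indexing route, per-team separation is typically done with one scheduled search per team writing into a team-specific summary index that only that team's role can read. A sketch of one team's savedsearches.conf entry, with every name a placeholder:

[Team A - daily metrics]
search = index=shared_metrics sourcetype=team_metrics team=A | sistats count by metric_name
enableSched = 1
cron_schedule = 5 0 * * *
action.summary_index = 1
action.summary_index._name = summary_team_a

Adding a new column later only affects rows summarized from that point onward; backfilling history would require a one-off run of the same search (via collect) over the older time range.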