All Topics

For the KV store, $SPLUNK_HOME/etc/system/local/server.conf was configured to use SSL. However, the following error occurs and the KV store process does not start properly. We believe there is no problem with the certificate itself, because the Web UI can complete TLS communication using the same server certificate.

splunkd.log:
ERROR MongodRunner [5072 MongodLogThread] - mongod exited abnormally (exit code 1, status: exited with code 1) - look at mongod.log to investigate.

mongod.log:
CONTROL [main] Failed global initialisation: InvalidSSLConfiguration: Could not find private key attached to the selected certificate.

Please provide information on how to resolve the above issue.
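A common cause of this particular mongod error is that the PEM file referenced by serverCert contains only the certificate, while the KV store (mongod) expects the server certificate and its private key concatenated in the same PEM file. A hedged sketch (the path is a placeholder; combine your certificate and key files, e.g. cert.pem followed by key.pem, into one PEM before pointing serverCert at it):

```
[sslConfig]
# PEM containing BOTH the server certificate and its private key,
# e.g. created with: cat cert.pem key.pem > combined.pem
serverCert = /opt/splunk/etc/auth/combined.pem
```

If the Web UI uses a separate web.conf certificate setting, it can work even when the [sslConfig] PEM lacks the key, which would explain why TLS works for the UI but not for the KV store.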
I have created a view with JavaScript that adds a search bar and displays results as an event list. How can I change the format in which the events are displayed? I want to use a user script to control the event display format.
Hi, I have a large index that contains event logs. I am trying to extract the usernames from EventID 4648 events. How can I display these along with the computer name that was logged into? Thanks in advance.
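A sketch of one possible search; the index, sourcetype, and field names (Target_User_Name, ComputerName) are assumptions based on the Splunk Add-on for Windows and may differ in your environment:

```
index=wineventlog EventCode=4648
| stats count by Target_User_Name, ComputerName
```

Running a bare search on EventCode=4648 first and inspecting the extracted fields will show which field names your data actually uses.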
For example: I have been hitting the pavement trying to figure out a search query for events that happened between 3:00 and 3:15; the next search should cover 3:01 to 3:16, and so on, then count the total events that occurred in each 15-minute bucket. Thank you in advance; any help and suggestions are greatly appreciated.
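For overlapping (sliding) windows like this, one common pattern is to count per minute and then sum over a 15-minute moving window with streamstats; a sketch, with the index name as a placeholder:

```
index=<your_index>
| timechart span=1m count
| streamstats time_window=15m sum(count) as count_15m
```

Each row then shows the total events for the 15 minutes ending at that minute (3:00-3:15, 3:01-3:16, and so on).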
Hi guys, I was wondering if someone could please give me a hand with this. We have written a custom TA to extract logs from a log source.

Log message examples:

INFO 09 Feb 14:31:53 [pool-3-thread-1] WebHandlerAPI - Received GET request at /api/monitor/logger from [IP ADDRESS]
INFO 09 Feb 14:31:53 [pool-4-thread-1] WebHandlerAPI - Received GET request at /api/monitor/performance from [IP ADDRESS]
INFO 09 Feb 14:31:53 [thread_check] threadMonitor - 15 threads running OK

props.conf:

category = Application
disabled = false
pulldown_type = true
TIME_FORMAT = %d %b %T
TIME_PREFIX = \s+\w+\s+
MAX_TIMESTAMP_LOOKAHEAD = 20
EXTRACT-pdr_generic = (?:\s|)(?P<level>.*?)\s+(?P<timestamp>\d.*?)\s+\[(?P<message_type>.*?)\]\s+(?P<message>.*?)$

It would be great if someone could please point out which part of props.conf needs to be improved.
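One thing that stands out: TIME_PREFIX = \s+\w+\s+ requires whitespace before the level keyword, but the sample events start directly with e.g. INFO, so the prefix may never match and timestamp recognition falls back to defaults. A hedged adjustment to try (untested against your data):

```
TIME_PREFIX = ^\w+\s+
TIME_FORMAT = %d %b %T
MAX_TIMESTAMP_LOOKAHEAD = 20
```

Note the timestamp has no year, so Splunk will infer the current year at index time.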
I need to group by a field where all possible values should be shown in the result. For example, the snippet below groups by interface, but rows can be omitted if the query returns no results for an interface.

<search> | stats count(eval(state="success")) as count by interface

For example, three interfaces exist: [A, B, C]. The search has no results for C.

Output:
interface   count
A           100
B           200

Missing record:
C           0

How can the missing records be included? Is there any option that does not use a lookup table?
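Without a lookup, Splunk can only report values it actually sees, so the full interface list has to be enumerated somewhere in the search. One sketch that appends zero-count placeholder rows with makeresults (the interface names A, B, C are assumptions standing in for your real list):

```
<search>
| stats count(eval(state="success")) as count by interface
| append
    [| makeresults
     | eval interface=split("A,B,C", ","), count=0
     | mvexpand interface
     | fields interface count]
| stats max(count) as count by interface
```

The final stats collapses each interface to its real count when one exists, or the placeholder 0 otherwise.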
I get logs from a system with a field that contains names; let's say Abc.xyz is the name of the field. I have a list of names in a CSV with 3 columns: id, name, description. I have already created the lookup table file and definition. Can someone help me set up a search query to alert every time any name from the test.csv file matches a name in the Abc.xyz field from the logs?
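A sketch of one approach, assuming the lookup definition is named test_lookup (rename to match yours) and matching on the name column; events with no match get a null description and are filtered out:

```
index=<your_index>
| lookup test_lookup name AS Abc.xyz OUTPUT description
| where isnotnull(description)
```

Saved as an alert with a "number of results > 0" trigger condition, this fires whenever any event's Abc.xyz value appears in the CSV.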
Hi, I have 10 hosts; only 3 of them are reporting to the deployment server and 7 are not. When I searched _internal, I could see logs coming in from only 3 hosts. How can I troubleshoot this issue further?
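Since _internal only shows 3 hosts, the 7 others are likely not forwarding at all (not just failing to phone home to the deployment server). A quick sketch to confirm exactly which hosts are sending anything:

```
| tstats latest(_time) as last_seen where index=_internal by host
| eval last_seen=strftime(last_seen, "%F %T")
```

Any host missing from this output has a forwarding problem on the client side; checking splunkd.log, outputs.conf, and deploymentclient.conf on those hosts, plus network connectivity to the indexers and deployment server, would be the next step.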
I have a field called folder_path which has values like the following.

folder_path
\Device\XYZ\Users\user_A\AppData\program\Send to OneNote.lnk
\Device\RTF\Users\user_B\AppData\program\send to file.Ink

Now I want to extract the following fields from folder_path:

username   file_destination
user_A     Send to OneNote.lnk
user_B     send to file.Ink

As shown in the example, username is extracted after the string "Users\", and similarly, file_destination is extracted after the last backslash. I have tried a few approaches but could not extract the fields properly because the values contain backslashes.
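A sketch using rex; inside the double-quoted regex, each literal backslash in the path is typically written as \\\\ (escaped once for the SPL string and once for the regex engine):

```
| rex field=folder_path "Users\\\\(?<username>[^\\\\]+)"
| rex field=folder_path "\\\\(?<file_destination>[^\\\\]+)$"
```

The first capture stops at the backslash after the username; the second anchors at the end of the string and grabs everything after the final backslash.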
Hi, one of our indexers has stopped receiving events because the indexer queue is full. I checked splunkd.log and see lots of error messages:

02-09-2023 09:49:02.354 +1100 ERROR pipeline [1807 indexerPipe_1] - Runtime exception in pipeline=indexerPipe processor=indexer error='Unable to create directory /mnt/splunk_index_hot/_internaldb/db/hot_v1_311115391 because Input/output error' confkey='source::/opt/splunk/var/log/splunk/splunkd.log|host::hostname|splunkd|1456359'
02-09-2023 09:49:02.354 +1100 ERROR pipeline [1807 indexerPipe_1] - Uncaught exception in pipeline execution (indexer) - getting next event

Has anyone come across this situation? How can it be fixed? Thanks!
I would like to set up alerts based on the running status of processes on a server. We have a couple of processes (services) running on the server, and I need to set up alerts to track their status. I would appreciate it if anyone could share the possibilities. Thanks in advance.
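If process data is already being collected (for example via the ps scripted input in the Splunk Add-on for Unix and Linux), one sketch is an alert that fires when a process disappears; the index, sourcetype, field, and process names here are assumptions:

```
index=os sourcetype=ps COMMAND="myservice"
| stats count
```

Scheduled every few minutes with a trigger condition of "number of results is 0" (or count=0), this alerts when the process was not seen in the window.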
I am trying to create a Splunk classic dashboard to show metrics for important Splunk errors, such as crash logs and logs that indicate Splunk performance issues. What are the important Splunk error logs that need to be monitored in this case, and where can I find them?
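Splunk's own errors land in the _internal index (splunkd.log and friends), and crash artifacts appear as source=*crash*.log on the affected instance. A sketch of a starting panel that surfaces the noisiest components:

```
index=_internal sourcetype=splunkd log_level=ERROR OR log_level=WARN
| stats count by host, component
| sort - count
```

Components such as TailReader, TcpOutputProc, and the various queue/pipeline messages are common candidates for performance-related panels; adjust the filter once you see what your deployment actually logs.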
As I write this I realize that what I want is likely not possible using this method. I want a fillnull (or similar) to happen before an eval. The eval is likely not even called if there are no events in the timechart span I am looking at. I want the eval to return a 1 when there are no events in that span.

This works, but is missing the eval:

index=main sourcetype=iis cs_host="site1.mysite.com"
| timechart span=10s max(time_taken)
| fillnull value=1

This is what I am using. It works, except when no events happen:

index=main sourcetype=iis cs_host="site1.mysite.com"
| eval site1_up=if(sc_status=200,1,0)
| timechart span=10s max(site1_up)

This charts a 1 if there was at least one 200 response from site1.mysite.com in the 10s span. It charts a 0 if there were responses, but none were 200. If there are no matching events, the span is probably not even evaluated, nothing is returned, and the chart looks like a 0. I want a 1 charted if there are no events in that 10s span.

Adding | fillnull value=200 sc_status after the timechart simply shows an extra column of sc_status at 200 in every span (column in the chart). Putting it before the eval does not work, since I believe nothing is done without an event. It should also only use fillnull (or similar) if no events are in that 10-second span. I have also tried | append [| makeresults ] without success, but I don't completely understand how that would work.

Logically, this is what I want. The reasoning for the up/down status is not important, since this is simply an example.
For each 10s span in the timechart:
| eval Site1_up=1 if cs_host=A and at least one sc_status=200
| eval Site1_up=0 if cs_host=A and no sc_status=200
| eval Site1_up=1 if there are no events matching cs_host=A
| eval Site2_up=1 if cs_host=B and at least one cs_method=POST
| eval Site2_up=0 if cs_host=B and no cs_method=POST
| eval Site2_up=1 if there are no events matching cs_host=B
| eval Site3_up=1 if cs_host=C AND cs_User_Agent=Mozilla and at least one cs_uri_stem=check.asmx
| eval Site3_up=0 if cs_host=C AND cs_User_Agent=Mozilla and no cs_uri_stem=check.asmx
| eval Site3_up=1 if there are no events matching cs_host=C

I am trying to make a chart of the up (1) / down (0) status of various components, some of which are determined by the IIS logs. Thanks
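Since timechart emits a row for every span across the search time range, with the aggregate left null when no events fall in that span, one sketch is to fill the nulls after the timechart rather than trying to do it before the eval:

```
index=main sourcetype=iis cs_host="site1.mysite.com"
| eval site1_up=if(sc_status=200,1,0)
| timechart span=10s max(site1_up) as site1_up
| fillnull value=1 site1_up
```

Spans with at least one 200 chart as 1, spans with events but no 200 chart as 0, and empty spans (null) are filled to 1. The same pattern extends to the multi-site case by computing each SiteN_up eval before the timechart and filling each column afterward.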
I have logs which contain parts like:

.. { "profilesCount" : { "120000" : 100 , "120001" : 500 , "110105" : 200 , "totalProfilesCount" : 1057}} ..

Here the key is an accountId and the value is the number of profiles in it. When I use max_match=0 in rex and extract these values, I get accountId=[12000000, 12000001, 11001005] and pCount=[100, 500, 200] for this example event. Since these accountIds are not mapped to their corresponding pCount, when I visualize them I get:

accountId   pCount
12000000    100 500 200
12000001    100 500 200
11001005    100 500 200

How can I map them correctly and show them in a table? This was my search query:

search <search_logic>
| rex max_match=0 "\"(?<account>\d{8})\" : (?<pCount>\d+)"
| stats values(pCount) by account

Thanks in advance
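One common way to keep the pairs together is to extract each key/value pair as a single multivalue field, mvexpand it, and only then split it into account and count; a sketch (the digit lengths are assumptions, adjust \d+ to match your accountId format):

```
search <search_logic>
| rex max_match=0 "(?<pair>\"\d+\"\s*:\s*\d+)"
| mvexpand pair
| rex field=pair "\"(?<account>\d+)\"\s*:\s*(?<pCount>\d+)"
| stats values(pCount) by account
```

Because the numeric-key pattern \"\d+\" does not match "totalProfilesCount", that entry is skipped automatically.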
I have a lookup with a field called IP. The field has values that contain multiple IPs, and I would like to separate them out, each into its own field. Some IPs are separated by commas and some by semicolons, and some fields have 3+ IPs. Regardless, I need the IPs beyond the first one to be in their own field columns named IP2, IP3, etc.

What I have:

IP
1.1.1.1,2.2.2.2

or

IP
1.1.1.1;2.2.2.2

I've tried something like the below, but the makemv only seems to work for the ",", and the separated IPs still show up in the original IP field.

| makemv delim=";" allowempty=true IP
| makemv delim="," allowempty=true IP
| mvexpand IP
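The second makemv has no effect because IP is already multivalue after the first one. A sketch that handles both delimiters in one pass with a tokenizer regex, then spreads the values into separate columns:

```
| makemv tokenizer="([^,;]+)" IP
| eval IP2=mvindex(IP, 1), IP3=mvindex(IP, 2), IP=mvindex(IP, 0)
```

The tokenizer captures each run of characters that is not a comma or semicolon; mvindex then pulls the first value back into IP and the rest into IP2/IP3 (extend with more mvindex calls if rows can hold more IPs).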
Hi Team, I have a requirement to build a metrics report with the conditions below:

- Similar report for 3 different teams (each should not be able to access the others')
- Underlying data (within the index) may contain sensitive information, so only the report should be accessible, not the entire index data
- Metrics for the 3 teams are present in the same index and sourcetype
- Should be flexible enough to include extra information in the report in the future (for history as well)

With these requirements, I thought of the solutions below but could not meet all the conditions.

Embedded reports: run only for a specific scheduled time range, so there is no flexibility in selecting different time ranges.
Summary indexing: I would need to create separate summary indexes (per team) and build a report/dashboard on each, but adding extra information for historical metrics seems difficult (that's my perception; correct me if I am wrong!).
Creating datasets: we can create separate datasets from one single root search, but I am not sure how access controls work with datasets. Please enlighten!

Do we have any other, better solution? Or do you feel one of the above solutions would meet my requirements? Any suggestions are welcome!
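For the summary-indexing route, the per-team isolation usually comes from roles rather than the search itself: each team's role is granted access only to its own summary index. A sketch of one team's scheduled population search (index, field, and team names are all assumptions):

```
index=metrics_source sourcetype=team_metrics team="team_a"
| stats count as events, avg(duration) as avg_duration by metric_name
| collect index=summary_team_a
```

Adding a new column to the stats later simply starts appearing in new summary events; older summary rows keep their original fields, so history is preserved, just without the new column backfilled (a one-off backfill search can fill the gap if needed).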
Good afternoon, I'm looking for a way to track impossible-travel events for users who are logging in to applications using Duo 2FA. Basically, if a user gets a Duo push from an IP in, let's say, America, and then another Duo event from France within a short time period, that would be an event we want to investigate. Is it possible to do this using Splunk queries?
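A rough sketch of the pattern: geolocate each authentication, compare each event with the user's previous one via streamstats, and flag large location jumps in short time windows. The index, event filter, and field names (src_ip, user) are assumptions for the Duo add-on, and the distance here is a crude lat/lon-degree measure rather than true great-circle distance:

```
index=duo event="authentication"
| iplocation src_ip
| sort 0 user _time
| streamstats current=f window=1 last(lat) as prev_lat, last(lon) as prev_lon, last(_time) as prev_time by user
| eval hours=(_time - prev_time) / 3600
| eval dist_deg=sqrt(pow(lat - prev_lat, 2) + pow(lon - prev_lon, 2))
| where hours < 4 AND dist_deg > 20
```

For production use, a haversine macro (or the community haversine app) would give real distances, and the thresholds would need tuning to your population.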
This is very similar to a lot of XML parsing questions; however, I have read through ~20 topics and am still unable to get my XML log to parse properly. Here is a sample of my XML file:

<?xml version="1.0" encoding="UTF-8"?><AuditMessage xmlns:xsi="XMLSchema-instance" xsi:noNamespaceSchemaLocation="HL7-audit-message-payload_1_3.xsd"><EventIdentification EventActionCode="R" EventDateTime="2022-11-07T04:18:01"></EventIdentification></AuditMessage>
<?xml version="1.0" encoding="UTF-8"?><AuditMessage xmlns:xsi="XMLSchema-instance" xsi:noNamespaceSchemaLocation="HL7-audit-message-payload_1_3.xsd"><EventIdentification EventActionCode="E" EventDateTime="2022-11-07T05:18:01"></EventIdentification></AuditMessage>

Here are the entire contents of my props.conf file:

[xxx:xxx:audit:xml]
MUST_BREAK_AFTER = \</AuditMessage\>
KV_MODE = xml
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIMESTAMP_FIELDS = <EventDateTime>
TIME_PREFIX = <EventDateTime>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
category = Custom
disabled = false

I would appreciate your assistance in parsing these events. Thank you.
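A hedged rewrite to try: with each AuditMessage on its own line, you can break on the XML declaration and skip line merging entirely. TIMESTAMP_FIELDS only applies to structured (INDEXED_EXTRACTIONS) sourcetypes, and TIME_PREFIX should match the literal text immediately before the timestamp, which here is the EventDateTime attribute:

```
[xxx:xxx:audit:xml]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<\?xml)
KV_MODE = xml
NO_BINARY_CHECK = true
TIME_PREFIX = EventDateTime="
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
category = Custom
disabled = false
```

SHOULD_LINEMERGE = false with an explicit LINE_BREAKER is generally preferred over MUST_BREAK_AFTER, which is only consulted when line merging is on.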
I am configuring TLS certificate hostname validation with a self-signed certificate in Splunk Enterprise 9.0, and it seems that Splunk cannot trust the CA:

ERROR X509Verify [TelemetryMetricBuffer] - Server X509 certificate (CN=,DC=,DC=,DC=) failed validation; error=19, reason="self signed certificate in certificate chain"

The configuration is the following:

[sslConfig]
# turns on TLS certificate requirements
sslVerifyServerCert = true
# turns on TLS certificate host name validation
sslVerifyServerName = true
serverCert = <path to your server certificate>

Do you know how I can tell Splunk which CA I'm using so it can trust the certificate? Or how can I configure it?
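The CA trust anchor is configured with sslRootCAPath in the same [sslConfig] stanza; a sketch extending the configuration above (paths are placeholders):

```
[sslConfig]
sslVerifyServerCert = true
sslVerifyServerName = true
serverCert = <path to your server certificate>
# PEM file containing the CA certificate (or full CA chain) that signed
# the server certificates Splunk should trust
sslRootCAPath = <path to your CA certificate in PEM format>
```

A restart is needed after the change, and every certificate presented to this instance must chain up to a CA in that PEM for validation to pass.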
I have a Splunk query, below, which pulls some events:

index="windows_events" TargetFileName="*startup*"

From the events, I picked the below TargetFileName field value:

\Device\HarddiskVolume3\Users\XYZ\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\Send to AbC.lnk

Now I wanted to search specifically for that value, and for that I used the below query, which gives me no results:

`get_All_CrowdstrikeEDR` event_simpleName=FileCreateInfo os="Win" TargetFileName="*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\*"

What I don't understand is that the first query returns events even though I used wildcards before and after "startup". When I extended the wildcard to the actual value, why isn't it working? Can't I use backslashes in Splunk searches?
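In Splunk search strings, the backslash is an escape character, so searching for a literal backslash generally requires doubling it; a sketch of the second query with the escaping applied:

```
`get_All_CrowdstrikeEDR` event_simpleName=FileCreateInfo os="Win" TargetFileName="*\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\*"
```

The first query works because "*startup*" contains no backslashes at all, so nothing gets swallowed by escaping.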