Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, all! I have an existing field, CHECKPOINT_ID, in my table 1, and a separate CSV file that contains an interpretation of each CHECKPOINT_ID. I want to add a new column, GIVR_CALLFLOW_DEFINED_CHKPNT, to table 1 by using a lookup! Here is table 1, and here is the CSV file (screenshots in the original post).
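A minimal sketch of the lookup step, assuming the CSV has been uploaded as a lookup file named checkpoint_lookup.csv (hypothetical name) whose columns are CHECKPOINT_ID and GIVR_CALLFLOW_DEFINED_CHKPNT: run the table 1 search, then append:

| lookup checkpoint_lookup.csv CHECKPOINT_ID OUTPUT GIVR_CALLFLOW_DEFINED_CHKPNT
| table CHECKPOINT_ID GIVR_CALLFLOW_DEFINED_CHKPNT

If the lookup file is not yet known to Splunk, it can be added first under Settings > Lookups > Lookup table files.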
Hi, all! How could I separate this field into several other fields, each formed of 4 characters? For example:

Original value: 31381012204777027704 becomes 3138 1012 2047 7702 7704
Original value: 3138111620941002204720387701W019 becomes 3138 1116 2094 1002 2047 2038 7701 W019
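A sketch of one approach, assuming the field is named original (a hypothetical name): rex with max_match=0 collects every 4-character chunk into a multivalue field.

| rex field=original max_match=0 "(?<chunk>.{4})"
| eval chunks=mvjoin(chunk, " ")

Individual columns can then be pulled out of the multivalue chunk field with mvindex, or the events fanned out with mvexpand.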
Hi guys, I have a string, say for example: abc. I want to look this string up and check whether it is present in a lookup table (test.csv). test.csv has these values:

Number  Value
1       xyz
2       abc
3       mnp
4       wgf

I want to check the presence of my search string abc in the lookup table and show Yes or No in the result table, i.e. if it is found in the lookup table the result should be Yes, else No. For example, abc is present in the lookup table, so my output should be:

Search string  Presence
abc            Yes

This is what I tried with my search string abc:

| inputlookup test.csv
| table value
| rename value AS V1
| eval x="searchstring"
| eval y="v1"
| eval match=if(match(x,y),1,0)
| where match=1
| table Searchstring, Yes

I tried this but didn't get a result. Kindly help me! Thanks in advance.
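A sketch of one way to get a Yes/No answer, assuming the lookup column is named Value as shown above. Note that eval x="searchstring" and eval y="v1" in the attempt compare two literal strings rather than the field contents, which is why it never matches:

| inputlookup test.csv
| eval searchstring="abc"
| eval hit=if(Value=searchstring, 1, 0)
| stats max(hit) as found by searchstring
| eval Presence=if(found=1, "Yes", "No")
| table searchstring Presence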
Hello! The CSS code below works when I put it directly in my dashboard, but not with an external stylesheet.

<panel depends="$visible$">
  <html>
    <style>
      div[data-test-panel-id^='relative'],
      div[data-test-panel-id^='realTime'],
      div[data-test-panel-id^='date'],
      div[data-test-panel-id^='dateTime'],
      div[data-test-column-id^='past'],
      div[data-test-panel-id^='advanced'],
      div[data-test^='real-time-column'] {
        display: none;
      }
    </style>
  </html>
</panel>

I call the sheet like this:

<form stylesheet="time.css">

What is wrong, please?
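One thing worth checking, offered as a guess rather than a confirmed diagnosis: an external stylesheet referenced via the form's stylesheet attribute must contain plain CSS only, with no <html> or <style> wrapper, and must live in the app's appserver/static directory. So time.css would hold just the rules:

/* time.css - plain CSS, no <style> tags */
div[data-test-panel-id^='relative'],
div[data-test-panel-id^='realTime'],
div[data-test-panel-id^='date'],
div[data-test-panel-id^='dateTime'],
div[data-test-column-id^='past'],
div[data-test-panel-id^='advanced'],
div[data-test^='real-time-column'] {
  display: none;
}

A restart or a refresh of static assets may also be needed before the change is picked up.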
Hello, everyone! I need help. I configured the DB Connect app on a heavy forwarder and connected a database input. I can view the DB logs on the heavy forwarder, but I want to forward this data to the indexers. How can I configure that?
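A minimal sketch of the usual approach, with hypothetical indexer hostnames: forwarding from a heavy forwarder is configured in outputs.conf, and DB Connect data follows the same path as any other input once forwarding is enabled.

# $SPLUNK_HOME/etc/system/local/outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997

indexAndForward = false stops the heavy forwarder from also keeping a local copy; the receiving port (9997 here) must be enabled on the indexers.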
We are receiving a log from the host (host=abc) and we have one interesting field named Ip_Address. This field holds multiple IPs, and an event is indexed for each IP at 5-minute intervals, like "Ping success for Ip_Address=10.10.101.10" or "Ping failed for Ip_Address=10.10.101.10". FYI, if I get events like "1:00pm ping failed" and "1:05pm ping success", we do not count that toward the failed percentage. Basically, only when the failure occurs more than once consecutively (e.g. 1:00pm ping failed and 1:05pm ping failed) is it considered a failure. I am using the query below to calculate the success and failure percentage of a single IP over an interval such as a month, but it does not fulfil my requirement: I want to cover all IPs in a single query, ideally shown in a dashboard visualization.

index=unix sourcetype=ping_log "Ping failed for Ip_Address=10.101.101.14" (earliest="01/04/2022:07:00:00" latest="1/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="1/14/2022:18:00:00") OR (earliest="01/17/2022:07:00:00" latest="1/21/2022:18:00:00") OR (earliest="01/31/2022:07:00:00" latest="1/31/2022:18:00:00")
| timechart span=600s count
| where count=2
| stats count
| eval failed_min=count*10
| eval total=failed_min/9900*100, SLA=100-total, Ip_Address="10.101.101.14"
| rename SLA as Success_Percent
| table Success_Percent Ip_Address
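A sketch of one way to handle all IPs at once, with the consecutive-failure rule expressed via streamstats; the rex field names are assumptions based on the sample events:

index=unix sourcetype=ping_log
| rex "Ping (?<status>success|failed) for Ip_Address=(?<ip>\S+)"
| sort 0 ip _time
| streamstats global=f window=2 count(eval(status="failed")) as failed_run by ip
| eval true_failure=if(failed_run=2, 1, 0)
| stats count as total_checks sum(true_failure) as failures by ip
| eval Success_Percent=round((1 - failures/total_checks)*100, 2)

With global=f and window=2, streamstats looks at each event plus the previous one for the same ip, so failed_run=2 flags the second of two consecutive failures; a failure followed by a success never reaches 2.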
All, I built a previous TA and upgrades worked fine in the past. My recent TA build with AOB 4.0 has an issue where the modular input passwords in passwords.conf are all erased and set to ******** (exactly 8 asterisks). I have tried to debug this every possible way I could. Has anyone seen an issue where passwords were reset to all asterisks? I know from the logs that this occurs immediately after the upgrade, but the logs don't shed light on why the reset occurs.

clear_password {"api_key": "********"}

I am ripping my hair out and I can't figure out why this is happening. Once I have upgraded and then upgrade again to a different build, the issue no longer occurs.
This mongod shutdown occurs 3-4 hours after starting Splunk. These are the logs found in splunk/var/log/splunk/mongod.log:

I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
I REPL [signalProcessingThread] shutting down replication subsystems
I REPL [signalProcessingThread] Stopping replication reporter thread
I REPL [signalProcessingThread] Stopping replication fetcher thread
I REPL [signalProcessingThread] Stopping replication applier thread
Good afternoon gurus, I was just put into a position where I have to teach myself Splunk. I don't have experience with this kind of query language and it's bringing me to my knees. Here's my query. There is a selected index and everything works perfectly until I add in a simple division statement; then it says the query is malformed, but I'm pretty sure that's not the case at all. I'm trying to get the percentage of events where the response_time is greater than 2 standard deviations:

index="myIndex"
| eventstats avg(response_time) as Average_Response_Time stdev(response_time) as Standard_Deviation count(response_time) as Total_Count
| eval calc = Average_Response_Time+(2*Standard_Deviation)
| eval 2xStd = if(response_time>calc, 1, 0)
| eventstats sum(2xStd) as 2times
| eval percent = 2times/Total_Count
| table response_time Average_Response_Time Standard_Deviation
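One likely culprit, offered as a sketch rather than a confirmed diagnosis: field names that start with a digit (2xStd, 2times) are ambiguous inside eval expressions, where 2times/Total_Count can be parsed as a malformed number rather than a field reference. Renaming the fields avoids the issue entirely:

index="myIndex"
| eventstats avg(response_time) as Average_Response_Time stdev(response_time) as Standard_Deviation count(response_time) as Total_Count
| eval calc = Average_Response_Time + (2 * Standard_Deviation)
| eval over_2std = if(response_time > calc, 1, 0)
| eventstats sum(over_2std) as over_2std_count
| eval percent = round(over_2std_count / Total_Count * 100, 2)
| table response_time Average_Response_Time Standard_Deviation percent

Alternatively, digit-leading names can be kept by wrapping them in single quotes wherever eval references them, e.g. eval percent = '2times'/Total_Count.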
Hello, I have Docker (no Kubernetes). What agent should I install? Thanks.
Hi, I have configured my Windows forwarder to use a custom CA and server certificate. Below is the configuration, and the forwarder is able to connect to the indexer fine.

File: C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = XXX:9998
clientCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\testCertificate.pem
sslPassword = XXX
useClientSSLCompression = true
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\myCAcertificate.pem

[tcpout-server://XXX:9998]

But I am still seeing the message below in the splunkd.log file:

X509Verify [14596 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates

Any idea if I am missing any configs here?
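A hedged guess at the missing piece: the HTTPDispatch component in the warning points at the forwarder's own management endpoint (port 8089), which is configured separately from forwarding and still serves Splunk's default certificate. A sketch of the corresponding server.conf stanza, reusing the same certificate paths as an assumption:

# C:\Program Files\SplunkUniversalForwarder\etc\system\local\server.conf
[sslConfig]
serverCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\testCertificate.pem
sslPassword = XXX
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\myCAcertificate.pem

A restart of the forwarder is needed for the change to take effect.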
Hello, how would I confirm that my Splunk configuration is established for IPv6 in addition to IPv4 traffic? Any help would be highly appreciated. Thank you so much.
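A sketch of the relevant settings, as a starting point: IPv6 listening is controlled in server.conf, and the quickest confirmation is checking which address families splunkd is actually bound to.

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
listenOnIPv6 = yes
connectUsingIpVersion = auto

With listenOnIPv6 = yes, splunkd listens on both IPv4 and IPv6; on the OS side, something like netstat -an | grep 8089 (or the receiving port) should then show both tcp and tcp6 listeners.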
Hello, I have installed the GitHub Add-On for Splunk but I am currently not seeing any data. I think I have possibly entered incorrect input fields, but I'm not sure where in GitHub I can find these values. Has anyone set this up before who could show me where the input fields are in GitHub?

Thanks, Sophie
I cannot use any of the fields extracted by spath inside an eval; the result is always null.

Input (formatted for easy reading):

{
  "meta": {
    "emit_interval_s": 600
  },
  "operations": {
    "kv": {
      "Get": {
        "total_count": 4,
        "percentiles_us": { "75": 17747.0, "95": 18706.0, "98": 18706.0, "99": 18706.0, "100": 18706.0 }
      },
      "GetClusterConfig": {
        "total_count": 708,
        "percentiles_us": { "75": 13723.0, "95": 14339.550000000001, "98": 14567.56, "99": 18207.0, "100": 18207.0 }
      },
      "GetMeta": {
        "total_count": 4,
        "percentiles_us": { "75": 15776.75, "95": 16761.0, "98": 16761.0, "99": 16761.0, "100": 16761.0 }
      }
    }
  }
}

And this is the query:

| spath input=json_field
| eval a=operations.kv.Get.percentiles_us.100
| table json_field operations.kv.Get.percentiles_us.100 a

In the output, a is always null, but operations.kv.Get.percentiles_us.100 always displays the correct value. What's happening here?
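The likely explanation: inside eval, the dot is the string-concatenation operator, so operations.kv.Get.percentiles_us.100 is parsed as a concatenation of several nonexistent fields and evaluates to null. Field names containing dots must be wrapped in single quotes when referenced in eval:

| spath input=json_field
| eval a='operations.kv.Get.percentiles_us.100'
| table json_field operations.kv.Get.percentiles_us.100 a

The table command is unaffected because it takes field names directly rather than eval expressions.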
I created an add-on with Add-on Builder and a modular input using Python code, but the Inputs page is not available: Error 404 and the message "Failed to load Inputs Page". The inputs.conf file was created only in the local folder and looks like this:

[khoros_api_python]
index = khoros
start_by_shell = false
python.version = python3
sourcetype = khoros_api_python
interval = 86400

What can be wrong?
Hello! I'm struggling with the time ranges within my query. I have two indexes (anonymized):

index=documentation contains the information about which element is mounted in a device.
index=eor contains events for devices.

Now I'm trying to search the index=eor events only for devices that contain the element COB, for the last xx time range. So I tried to set the time range for the subsearch like this:

index=eor name IN (*) status IN (*)
    [ search index=documentation earliest=1 latest=now()
    | search element = COB
    | table devices ]
| table a, b, c, d

But I'm getting no results. If I set the time picker to a time range in which the documentation index has its latest events, I do get results.

Greetings Chris
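A sketch of one variant worth trying, with the caveats hedged: earliest=0 is the usual way to force a subsearch over all time, and the field returned by the subsearch must match the field name the outer search filters on (the devices-vs-name mismatch below is an assumption):

index=eor
    [ search index=documentation earliest=0 latest=now element=COB
    | fields devices
    | rename devices as name
    | format ]
| table a, b, c, d

The subsearch result is expanded into (name="..." OR name="..."), so if the outer events carry the device under a different field, the rename must target that field instead.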
I'm researching a solution for sending Windows event logs to a third-party service that requires them to be in "Snare over Syslog" format, not the RFC 3164-compliant format that Splunk puts them in when using syslog output. Has anyone accomplished this? We do have a heavy forwarder in our environment that is set up to receive these logs from our universal forwarders, and I know you can use things like SEDCMD to modify data within the logs as they come in, but I haven't found a way to completely reformat them into this new format and send them out. If anyone has done this or has any tips, I'd appreciate it! This is what the format looks like: Appendix A - Event Output Format - Snare SCWX Windows Agent v5 Documentation - Confluence (atlassian.net)
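A rough sketch of the mechanism only, not a complete Snare implementation: on the heavy forwarder, index-time transforms can rewrite _raw and route the events to a syslog output. All stanza names and the FORMAT string below are illustrative; the real Snare layout (tab-delimited fields, event counter, criticality) would need a considerably more elaborate FORMAT, and many shops find it easier to do this reshaping in an external syslog server instead.

# props.conf (heavy forwarder)
[WinEventLog:Security]
TRANSFORMS-snare = rewrite_to_snare, route_to_snare

# transforms.conf
[rewrite_to_snare]
REGEX = (?s)^(.*)$
DEST_KEY = _raw
FORMAT = MSWinEventLog Security $1

[route_to_snare]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = snare_out

# outputs.conf
[syslog:snare_out]
server = thirdparty.example.com:514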
Hello, I am trying to solve the following problem. HEC on a HF is used for data receiving. In splunkd.log on the heavy forwarder I found this error:

ERROR HttpInputDataHandler - Failed processing http input, token name=linux_rh, channel=n/a, source_IP=10.177.155.14, reply=9, events_processed=18, http_input_body_size=8405, parsing_err="Server is busy"

There were 7 messages of this kind during a 10-minute interval. I found that reply=9 means "server is busy"; this is a message telling the log source to stop sending data because the HF is overloaded (the log source really did stop sending data). At the same time the parsing, aggregation, typing, httpinput and splunktcpin queues had a 100% fill ratio, while the indexing queue had a 0% fill ratio. At the same time, the VMware host on which the HF runs was probably overloaded: the CPU frequency on this host is usually about 1 GHz but grew to 4 GHz briefly during this period (probably not caused by the Splunk HF). At the same time, there were no ERROR messages in splunkd.log on the IDX cluster, which receives data from the HF in question.

Based on this information, I came to the following conclusions:
1. Because the index queue on the HF was not full and there were no ERRORs on the IDX cluster, there was no problem on the IDX cluster or on the network between the HF and the IDX cluster.
2. Due to the VMware host overload, the HF did not have sufficient resources to process messages, so the parsing, aggregation, and typing queues became full. As a result, the httpinput and splunktcpin queues filled up, the "ERROR HttpInputDataHandler - Failed processing http input" messages were generated, and data reception from the log source stopped.
3. As soon as the VMware host overload ended (after circa 10 minutes), data reception resumed; no data was lost.

Could you please review my conclusion and tell me if I am right, or whether there is something more to investigate? And what should be done to avoid this problem in the future: re-configure the queue settings (set a higher max_size_kb), add resources to the VMware host, or something else? Thank you very much in advance for any input. Best regards, Lukas Mecir
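For the queue-sizing option, a sketch of what that would look like, with sizes that are purely illustrative: queue sizes on the HF are set per queue in server.conf. Larger queues only buy time during short CPU-starvation episodes; if the VMware host contention recurs or lasts longer, the queues fill again, so fixing the resource contention is the more durable remedy.

# $SPLUNK_HOME/etc/system/local/server.conf on the HF
[queue=parsingQueue]
maxSize = 10MB

[queue=aggQueue]
maxSize = 10MB

[queue=typingQueue]
maxSize = 10MB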
For instance, I want to filter for HTTP from 192.168.0.100. The closest I can get:

remoteAddress = 192.168.0.100
protocol = tcp

Is there no way to include the port? https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/Inputsconf#Windows_Host_Monitoring
Hi all, I'm trying to do a field extraction of the database name (let's call the field "DBname") from logs that come in 2 formats:

Jan 19 15:58:06 192.168.1.2 Jan 19 15:58:06 Message forwarded from Database1: Oracle Audit blablablabla
Jan 20 06:36:17 192.168.1.3 Jan 20 06:36:17 Database2 journal: Oracle Audit blablablablabla
Jan 21 06:36:17 192.168.1.4 Jan 21 06:36:17 Database_10 journal: Oracle Audit blablablablabla
Jan 22 15:58:06 192.168.1.5 Jan 22 15:58:06 Message forwarded from Database4: Oracle Audit blablablabla
Jan 23 15:58:06 192.168.1.6 Jan 23 15:58:06 Message forwarded from prmds1: Oracle Audit blablablabla
Jan 24 15:58:06 192.168.1.7 Jan 24 15:58:06 Message forwarded from Database_15: Oracle Audit blablablabla
Jan 26 15:58:06 192.168.1.9 Jan 26 15:58:06 Message forwarded from prmds2: Oracle Audit blablablabla
Jan 27 15:58:06 192.168.1.8 Jan 27 15:58:06 fafa32 journal: Oracle Audit blablablablabla

So, the DBname value comes after "Message forwarded from" or before "journal". Splunk fails with the regex, and unfortunately so do I; the problem seems to be that the two event formats are so similar. My question is whether I am missing something with the regex, or whether I should approach it in a completely different manner. Thank you for the help!
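A sketch based on the sample events above: since a single rex cannot reuse the same capture name across alternatives, one workable pattern is two rex calls plus coalesce.

| rex "Message forwarded from (?<db_fwd>[^:]+):"
| rex "\d{2}:\d{2}:\d{2}\s+(?<db_journal>\S+)\s+journal:"
| eval DBname=coalesce(db_fwd, db_journal)

The first rex handles the "Message forwarded from Database1:" variant; the second anchors on the repeated timestamp so it captures the token immediately before "journal:" (e.g. Database2, fafa32) without matching the source IP, because the IP after the first timestamp is never followed by "journal:".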