All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm trying to use the Missile Map visualization; however, whenever I use the Custom Cluster Map visualization in a dashboard together with the Missile Map visualization, the label shows an error. How can I solve this error? Thanks!
Splunk cannot load old data, only current data, even though it shows the event count. Before this, I had moved some Splunk cold db folders several times to free up space, and it worked fine. I don't understand what has happened now. Is there any way to recover the data without Splunk search? Splunk is installed on Windows.
I have a search on index=weblogs where I filter results and then use rex to extract an IP address into a new field called RemoteIP. I want to then search our firewall logs on index=firewall for that newly extracted field RemoteIP. I have been playing around with subsearches and joins but am not getting far.
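A hedged sketch of the subsearch pattern that usually fits this case, assuming the firewall events carry the address in a field named src_ip (swap in your real filters and field names):

index=firewall
    [ search index=weblogs <your filters here>
      | rex field=_raw "(?<RemoteIP>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
      | dedup RemoteIP
      | rename RemoteIP as src_ip
      | fields src_ip ]

The subsearch runs first and expands to (src_ip="1.2.3.4" OR src_ip="5.6.7.8" ...), so no join is needed; note that subsearches cap at 10,000 results by default.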
Hi all, I want to know how Splunk extracts fields from Splunk_TA_windows inputs when mode=multikv. The _raw event does not seem to have any sort of field indicator (as compared to events from TA_nix, which have headers). As an example, Splunk_TA_windows/local/inputs.conf:

[perfmon://Network-Bytes]
disabled = false
counters = Bytes Total/sec; Bytes Received/sec; Bytes Sent/sec;
interval = 60
mode = multikv
index = perfmon
useEnglishOnly = true
object = Network Interface
sourcetype = PerfmonMk:Network

gives _raw events as seen indexed in Splunk:

vmxnet3_Ethernet_Adapter 19069.926362422757 11044.290764991998 8025.635597430761
vmxnet3_Ethernet_Adapter 26173.569591676503 15701.614528029395 10471.95506364711
vmxnet3_Ethernet_Adapter 28654.246470518276 17482.977608482255 11171.268862036022

From this output, Splunk magically extracts fields like:

Bytes_Received/sec
Bytes_Sent/sec
Bytes_Total/sec
instance
category
collection

I checked the Splunk_TA_windows configs and ran btool, but could not trace any configs other than some standard PerfmonMk:<object> stanzas in Splunk_TA_windows/default/props.conf, which contain only FIELDALIAS settings. What am I missing? How does Splunk know which field is which? How does it even get values for category and collection when those values are not even present in the _raw?

For further comparison, the TA_nix add-on does this in a much more legible manner (which can easily be understood and played around with), like:

Name rxPackets_PS txPackets_PS rxKB_PS txKB_PS
eth0 1024.00 1972.50 1415.04 674.94

Additionally: I want to convert the PerfmonMk events to metrics; has anyone attempted that?
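On the metrics question at the end: a hedged sketch of one possible route using mcollect, assuming a metrics index (here called perfmon_metrics) already exists; the metric name is made up for the example:

index=perfmon sourcetype=PerfmonMk:Network
| eval metric_name="network.bytes_total_per_sec", _value='Bytes_Total/sec'
| fields metric_name _value instance
| mcollect index=perfmon_metrics

Each event needs a metric_name and a numeric _value; the remaining fields (instance here) become dimensions. Newer versions of the Windows TA may also support writing perfmon directly to a metrics index from inputs.conf, which could be simpler.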
I have a Splunk dashboard. It has a text field named "Error msg" and a time picker (image: "Dashboard items"). If the text field "Error msg" is empty, I am able to display all the logs within the given time frame. Query:

index=AppIndex cf_app_name=AppName msg!="*Hikari*" taskExecutor-
| fields _time msg
| sort - _time
| table _time msg

Now, if I enter a log message in the text field "Error msg", my goal is, for the given time frame:
1. Search all the occurrences of this log message.
2. Get the latest occurrence.
3. In the output table, print the logs right before the last occurrence of the msg.
This way, the user can trace the error msg and look at the logs (right before the error entered in the text field) to find what caused the error. Any suggestions on how this can be done via a query?
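A hedged sketch of one possible approach, using a subsearch to find the time of the latest occurrence and letting it cap the outer search's time range (the $error_msg$ token name is an assumption for your text input):

index=AppIndex cf_app_name=AppName msg!="*Hikari*" taskExecutor-
    [ search index=AppIndex cf_app_name=AppName msg="*$error_msg$*"
      | head 1
      | eval latest=_time
      | return latest ]
| sort - _time
| head 50
| table _time msg

return latest hands latest=<epoch> back to the outer search, which Splunk reads as a time bound, so the table ends at the last match; head 50 then keeps the 50 events leading up to it (adjust to taste).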
When I try to create a Shared Services server (for a development environment), it prompts me for the password for the "user" account. I have tried a variety of things: using the default password that comes with SOAR, and adding a user called "user" and trying that password. None of it works, and after 5 attempts it ruins the installation and I have to scuttle the VM and start over. Has anyone run into this issue?
Hi, I cannot find the documentation that explains the various statuses in scheduler.log. For example, here are a few:

continued
delegated_remote
delegated_remote_completion
delegated_remote_error
skipped
success

Does anyone have a reference? Thank you!
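While looking for a reference, a quick hedged sketch for enumerating the statuses that actually occur in your own environment:

index=_internal sourcetype=scheduler
| stats count by status

scheduler.log is indexed into _internal, so this also shows the relative frequency of each status, which helps when chasing skipped searches.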
I'm new and a novice at Splunk, although I have installed, set up, and played with searches in Splunk in a lab. My question: if I have servers sending logs from different environments (prod, test, dev), what is the best way to organize the incoming logs by environment? I see I can use tags and/or indexes, but which approach makes more sense?
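A hedged illustration of the index-per-environment approach as it might look in a forwarder's inputs.conf (the index and sourcetype names here are made-up examples):

[monitor:///var/log/myapp]
index = app_prod
sourcetype = myapp

with app_test and app_dev counterparts on the test and dev servers. Separate indexes also let you set different retention periods and role-based access per environment, which tags cannot do.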
I want to search for something like: index=whatever "term_1" AND (at least one event in the source of the matching event contains term_2). Suppose source1 is /var/log/source1.log:

event 1
event 2 term_2
event 3
event 4 term_1

and source2 is /var/log/source2.log:

event 1
event 2
event 3 term_1

When searching for term_1, I want to see results only from source1, because source1 also has an event containing term_2.
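A hedged sketch of one way to express this, collecting the qualifying sources in a subsearch first:

index=whatever "term_1"
    [ search index=whatever "term_2"
      | dedup source
      | fields source ]

The subsearch expands to (source="/var/log/source1.log" OR ...), so the outer term_1 search only returns events from sources that contain at least one term_2 event.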
This is the basic case: I have an event

2021-12-28T06:24:17.567|SEARCHING|{"field1":"value1","field2":5,"field3":"la la la"}

My search:

index="redact" SEARCHING | spath path="field3"

Splunk is separating the values, but the field3 column is empty for all events. Can anyone please assist?
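A hedged guess at the cause, with a sketch: spath reads _raw by default, and here _raw only becomes JSON after the second pipe character, so isolating the JSON first should help (json is just an illustrative field name):

index="redact" SEARCHING
| rex field=_raw "\|SEARCHING\|(?<json>\{.+\})$"
| spath input=json path=field3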
Hello, I've got a search query where I'm looking for unexpected ssh connections to my instances, but I've got one server whose IP address changes dynamically, and I want to exclude the IP address of that host because I know there will be expected ssh connections from it. I'm running a subsearch against AWS description logs, grabbing the IP of the box based on its name and returning the IP address in hopes I can use it in my main search. So far it's not working how I expect and I'm not sure why. I would expect not to see entries for hostnameA with usernameA coming from the source IP I get from my subsearch, but my results include those entries. Here's my search so far:

index=X sourcetype=linux_secure eventtype=sshd_authentication action=success
| eval exclude_host_ip=[ search index=X sourcetype=aws:description source=*:ec2_instances (tags.host=* OR tags.Name=*) earliest=-24h latest=now
    | eval hostName=coalesce('tags.host', 'tags.Name')
    | search hostName=dynamic_ip_hostname
    | sort - _time
    | dedup private_ip_address
    | eval ip="\"".private_ip_address."\""
    | return $ip]
| search NOT (host=hostnameA AND user=usernameA AND user_src_ip=exclude_host_ip)
| table _time, user, host, user_src_ip
| sort - _time
| dedup _time user host user_src_ip
| rename _time as Time, user as "Username", host as "Host", user_src_ip as "Source IP"
| convert timeformat="%m-%d-%Y %H:%M:%S" ctime(Time)
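A hedged sketch of the likely culprit and one fix: inside | search, user_src_ip=exclude_host_ip compares the field against the literal string "exclude_host_ip", not against the other field's value (| where would do a field-to-field comparison, but | search does not). One alternative is to skip the eval and let the subsearch expand directly inside the NOT (subsearch body unchanged from the original; tail of the pipeline omitted here):

index=X sourcetype=linux_secure eventtype=sshd_authentication action=success
| search NOT (host=hostnameA AND user=usernameA AND
    [ search index=X sourcetype=aws:description source=*:ec2_instances (tags.host=* OR tags.Name=*) earliest=-24h latest=now
      | eval hostName=coalesce('tags.host', 'tags.Name')
      | search hostName=dynamic_ip_hostname
      | sort - _time
      | dedup private_ip_address
      | rename private_ip_address as user_src_ip
      | fields user_src_ip ])
| table _time, user, host, user_src_ip

The subsearch expands to (user_src_ip="x.x.x.x"), so the quoting eval and return $ip are no longer needed.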
I was hoping someone could help me. We are looking into deploying Sysmon and the universal forwarder remotely in very specific circumstances (suspicious activity on a host or by a user, etc.). I am struggling to get the universal forwarder set up remotely. Essentially I just need the universal forwarder to forward the Sysmon event logs (Microsoft-Windows-Sysmon/Operational), but I need to be able to do this remotely via command line or script. I came across a Splunk article about setting up the forwarder with a static config, which seemed good, but looking into the config options it doesn't seem to let you specify which logs to collect; it gives you the usual options of Security, System, Application, etc., but doesn't appear to support anything else, unless I'm mistaken. Does anyone know if it's possible to include a config file/parameters within the installer?
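A hedged sketch of the relevant piece: any Windows event log channel, Sysmon's included, can be collected with a WinEventLog stanza in inputs.conf (the index name is an assumption):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = sysmon

The MSI's built-in options only cover the common channels, but you can lay this file down yourself: install silently (e.g. msiexec /i splunkforwarder.msi AGREETOLICENSE=Yes /quiet), then script a copy of the stanza into an app under etc/apps or into etc/system/local before starting the forwarder.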
I am trying to use fit with DBSCAN and getting the error "DBScan Error in fit command: Memory limit exceeded". I increased the memory limit from 1000 MB to 3000 MB and am still getting the error at 50k records. I need to process 500k records. Is there any workaround for this situation?
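A hedged sketch of one common workaround, fitting on a random sample to stay under the memory ceiling (the index, field names, and eps value are placeholders):

index=my_data
| eval r=random() % 10
| where r=0
| fields - r
| fit DBSCAN eps=0.5 field1 field2

DBSCAN's memory footprint grows steeply with row count, so sampling (or pre-aggregating with stats before the fit) is often more practical than raising the MLTK limit further.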
I need to extract the contents of the message field in a JSON log, but the leading strings must be ignored, up through 'stdout F'; so far I can only capture the part in front, the second timestamp. Any ideas how to do this? Examples:

{
  app: app01
  message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 09:00:00,799] INFO - INFO
  region: southamerica-east1
}
{
  app: app02
  message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 10:20:25,799] ERROR - APIAuthenticationHandler API authentication failure
  region: southamerica-east1
}
{
  app: app03
  message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 12:57:00,799] WARN - failure due to Invalid Credentials
  region: southamerica-east1
}
{
  app: app04
  message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 14:57:25,799] WARN - APIAuthenticationHandler API authentication
  region: southamerica-east1
}
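A hedged sketch using rex to discard everything up to and including "stdout F" (payload, log_time, and level are illustrative names; swap in your index):

index=my_index
| rex field=message "stdout F \[(?<log_time>[^\]]+)\] (?<level>\w+) - (?<payload>.+)$"
| table app log_time level payload

The pattern anchors on the literal "stdout F", so the leading container timestamp is never captured; if you only need the tail, the simpler "stdout F (?<payload>.+)$" also works.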
If we move the folder "Splunlpkg.backup" from the indexer server to another data mount (i.e., from /dev/sda3/ to /dev/other), what could the consequences be? Will there be any data loss?
Hello Splunk community! For some context, I started by adding some files into a directory first, then I configured the monitor processor of the Splunk universal forwarder in inputs.conf to monitor the directory. However (after restarting the universal forwarder), when I searched for the index in Splunk Enterprise, there were no search results. Afterwards, I added some new files to the directory and suddenly the logs appeared in the search, and what confuses me is that the results do contain logs from the files that already existed in the directory. Is anyone able to explain why the logs of the existing files didn't appear at first, and only appeared after I added new files to the directory? Thank you in advance!
I have a few apps that contain reports that I need to copy to ES, please. Thank you.
Hello, we are sizing a Splunk solution for internal usage. Referring to the documentation, it says that a mid-size indexer requires 48 vCPU and 64 GB RAM. However, I wanted to understand how many EPS this kind of indexer can handle. Please advise.
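A hedged back-of-the-envelope conversion, since indexer sizing is normally quoted in GB/day rather than EPS (the 300 GB/day capacity and 500-byte average event size below are assumptions for illustration, not documented ratings):

EPS ≈ daily volume / average event size / 86,400 s
    ≈ 300,000,000,000 bytes / 500 bytes / 86,400 s
    ≈ 6,900 events per second

So the EPS such an indexer handles depends directly on your average event size; measuring that from a sample of your own data is the reliable way to size.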
How do I plot a per-operation success rate over a rolling 24-hour period? As a point-in-time query producing a chart, I do:

index=kubernetes source=*proxy* api.foo.com OR info OR commitLatest
| rex field=_raw ".*\"(POST|GET) \"(?<host>[^\"]+)\" \"(?<path>[^\"\?]+)[\?]?\" [^\"]+\" (?<raw_status>\d+) (?<details>[^\ ]+) "
| eval status=case(details="downstream_remote_disconnect","client disconnect", match(details, "upstream_reset_after_response_started"),"streaming error", true(),raw_status)
| eval operation=case(match(path,".*contents"),"put-chunked-file", match(path,".*info"),"get-file-info-internal", match(path,".*commitlatest"),"commit-latest-internal", true(),"get-chunked-file")
| eval failure=if(match(status,"^(client disconnect|streaming error|[0-9]|400|50[0-9])$"),1,0)
| stats count by operation, failure
| eventstats sum(count) as total by operation
| eval percent=100 * count/total
| stats list(*) by operation
| table operation, list(failure), list(percent), list(count)
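A hedged sketch of one way to make this rolling, keeping the rex/eval stages above and swapping the tail for an hourly series with a sliding 24h window (span and window sizes are arbitrary choices):

index=kubernetes source=*proxy* api.foo.com OR info OR commitLatest
| ... (same rex and eval stages as above) ...
| bin _time span=1h
| stats sum(failure) as failures, count as total by _time, operation
| streamstats time_window=24h sum(failures) as failures_24h, sum(total) as total_24h by operation
| eval success_rate=round(100 * (1 - failures_24h / total_24h), 2)
| xyseries _time operation success_rate

streamstats time_window needs events in ascending _time order, which stats by _time already produces; xyseries then pivots the result into one series per operation for a line chart.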
<query>
index=index_test
| dedup empID
| eval tot = case (match('call.code', "1") OR match('call.code', "2") OR match('call.code', "3") OR match('call.code', "4") OR match('call.code', "5"), "Success", match('call.code', "6"), "Failure")
| stats count(eval(tot="Success")) as "TotalSuccess" count(eval(tot="Failure")) as "TotalFailure"
| rename TotalSuccess as SUCCESS, TotalFailure as FAILURE
</query>

In the drilldown part:

<drilldown>
  <set token="abc">$click.value$</set>
  <set token="xyz">case ($click.name2$="FAILURE", "6", $click.name2$="SUCCESS", "1,2,3,4,5")</set>
  <link target="_blank">
    search?q=index=index_test call.operation IN "$abc$" call.code IN "click.name2"
    | dedup empID
    | eval tot = case (match('call.code', "1") OR match('call.code', "2") OR match('call.code', "3") OR match('call.code', "4") OR match('call.code', "5"), "Success", match('call.code', "6"), "Failure")
  </link>
</drilldown>

Here in the drilldown, I want to pass multiple values via $click.name2$="SUCCESS" mapping to "1,2,3,4,5", but it is not taking the values.
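A hedged sketch of one possible fix: <set> stores its body as literal text and never evaluates case(), whereas Simple XML drilldowns support <eval token>, which does evaluate (token names kept from the original):

<drilldown>
  <set token="abc">$click.value$</set>
  <eval token="xyz">case("$click.name2$"=="FAILURE", "6", "$click.name2$"=="SUCCESS", "1,2,3,4,5")</eval>
  <link target="_blank">
    search?q=index=index_test call.operation IN ("$abc$") call.code IN ($xyz$) | dedup empID
  </link>
</drilldown>

The link then references the evaluated $xyz$ token instead of the raw click.name2; with xyz set to 1,2,3,4,5, the clause expands to call.code IN (1,2,3,4,5).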