All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone, I have logs like 2022-11-23 12:47:42.000 id="123" event="some text text2 text3 text4" and I want to trim everything that comes after three consecutive spaces, so the indexed raw log becomes 2022-11-23 12:47:42.000 id="123" event="some text text2 text3". I set this up in props.conf: [my_sourcetype] ... EXTRACT-event = event="(?<event>.+?)\s{3,}.*" ... It works fine and I get the event field I want, but the raw events still contain the old text after the 3+ spaces. What should I add to props.conf to get the corrected raw logs?
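One index-time approach (a sketch, untested against your data) is a SEDCMD in props.conf on the indexer or heavy forwarder. EXTRACT only creates a search-time field and never changes _raw, whereas SEDCMD rewrites the raw event before it is indexed:

    [my_sourcetype]
    # strip from the first run of 3+ whitespace characters through the rest of
    # the line, keeping the closing quote of the event field
    SEDCMD-trim_event = s/\s{3,}[^"]*"$/"/

Note this only affects newly indexed data; events that are already indexed keep their original _raw.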
How can I create a notable event or alert if any of my correlation searches is getting skipped?
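One possible approach (a sketch; it assumes your ES correlation searches follow the usual "... - Rule" naming convention) is to alert off the scheduler logs in _internal, where skipped runs are recorded:

    index=_internal sourcetype=scheduler status=skipped savedsearch_name="* - Rule"
    | stats count by savedsearch_name, reason

Schedule this and set the alert to trigger when the number of results is greater than zero.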
Dear All, can you please suggest whether any index creation (through the CLI) is required to configure/onboard the new API into the heavy forwarder. App name: Cisco Umbrella Add-On for Splunk
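For reference, if a dedicated index is required it would normally be created on the indexer tier rather than on the heavy forwarder. A minimal sketch (the index name here is only a placeholder):

    splunk add index cisco_umbrella

or the equivalent [cisco_umbrella] stanza in indexes.conf on the indexers.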
I want to monitor all my hosts, ESXi servers, etc. in my vCenter environment. I am working in a distributed environment, and I want to send all alarms (for errors) and all data that can help me ensure that the health of my vCenter environment is good. Can someone please help and send me the steps to do that? It would be helpful to also add tutorials or documentation for each part. (For example, I don't know on which component to enable the HEC token, or how to use an API to send the alarms from vCenter to Splunk.)
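As an illustration of the HEC side only (a sketch; the hostname, port, token, and sourcetype are placeholders, and HEC is normally enabled on a heavy forwarder or on the indexers under Settings > Data Inputs > HTTP Event Collector):

    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"sourcetype": "vmware:alarm", "event": {"alarm": "HostConnectionLost", "severity": "error"}}'

For collecting data from the vCenter side, the Splunk Add-on for VMware is the usual starting point.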
I have six eventtypes that each check Juniper router logs for an interface bounce (an up/down event). These are working well. Here is an example; the other five are just variations of this (different routers and interfaces):

sourcetype="syslog" host_rdns="lo0.router1.domain.com" AND SNMP AND "xe-0/0/1" NOT "0/3/1.*"

I am running the following search during business hours (08:00 to 20:30, 7 days a week) as a timechart that spans one day and displays each eventtype as the "office#" site name, with how many flaps per hour occurred during business hours:

sourcetype="syslog" (eventtype="office1" OR eventtype="office2" OR eventtype="office3" OR eventtype="office4" OR eventtype="office5" OR eventtype="office6") NOT UI_CMDLINE | eval date_hourmin=strftime(_time, "%H%M") | eval date_numday = strftime(_time, "%w") | eval date_dow=strftime(_time, "%A") | eval full_datew = _time." ".date_dow| eval mytime=strftime(_time, "%Y-%m-%d, %A") | search (date_hourmin>=0800 date_hourmin<=2030 AND date_numday>=0 date_numday<=6) | timechart span=1d count as "Interface Flap" by eventtype | eval time=strftime(_time, "%m/%d/%Y, %A") | fields - _time | rename office1 as "Home Office", office2 as "Seattle", office3 as "Portland", office4 as "Dallas", office5 as "Chicago", office6 as "New York", time as "Day, Date"

This is working as I want and expect it to, like so: But I cannot figure out how to display all six eventtypes (sites) at all times, including the eventtypes with ZERO counts. I've tried everything I can think of (fillnull, adding fake results; maybe I am doing that wrong?) but I cannot figure out what I am missing/doing wrong. Can someone provide pointers on the best way to address this issue?
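One common trick (a sketch using the eventtype names from your search) is to force the missing columns into existence right after the timechart; fillnull with an explicit field list creates each named field at 0 even when no events produced it:

    ... | timechart span=1d count as "Interface Flap" by eventtype
    | fillnull value=0 office1 office2 office3 office4 office5 office6
    | rename office1 as "Home Office", office2 as "Seattle", ...

Do the fillnull before the rename so the field names still match the eventtype names.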
Hi Team, I want to store the query results in a lookup file, but the outputlookup command is not updating the CSV with the result set. index = ........ queryresults ............ | outputlookup test.csv Are any changes required in the query? Regards, Supraja
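A couple of things worth checking (hedged, since the full search isn't shown): the search must actually return rows at the point where outputlookup runs, and the calling user needs write permission on the lookup in the app where it lives. It can also help to set the overwrite options explicitly to rule out defaults:

    index=... | stats count by host
    | outputlookup append=false override_if_empty=false test.csv

append and override_if_empty are documented outputlookup options; override_if_empty=false stops an empty result set from wiping the existing file.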
Hi, could you provide me with a search query for one of my indexes, es_splunk, so that we can find all the null fields? The regex should be case sensitive so it only catches "null", all lower case (though they may all be that way anyway; just mentioning for completeness). There could also be fields that are not "null" but simply an empty string. Both cases should be checked if we want 100% coverage. Thanks.
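A sketch of one way to do this (the string comparison in eval is case sensitive, so it only matches the lower-case literal "null"; nothing here is specific to your data beyond the index name):

    index=es_splunk
    | foreach * [ eval null_fields=if('<<FIELD>>'=="null" OR '<<FIELD>>'=="", mvappend(null_fields, "<<FIELD>>"), null_fields) ]
    | stats count by null_fields

foreach iterates over every extracted field; null_fields collects the names of fields whose value is the literal string "null" or an empty string.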
Hello everyone, I need your help please. I am using the Location Tracker to follow some alerts. My SPL request is: index="imcfault" sourcetype="st_imcfault" | lookup switchs.csv ip AS sourceIp | rex field=location "^(?<latitude>.+?), (?<longitude>.+?)$" | table _time latitude longitude faultDesc The lookup switchs.csv returns the following fields: adresse, ip, label, location. The final result of the request is shown in the screenshot. I want to have the static icon in two colors: orange for severity between 0 and 2, and red for severity between 3 and 4. Thank you so much.
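A hedged sketch for the color logic (whether the Location Tracker visualization honors it depends on the app, but rangemap at least gives you a categorical field to drive it with):

    index="imcfault" sourcetype="st_imcfault"
    | lookup switchs.csv ip AS sourceIp
    | rex field=location "^(?<latitude>.+?), (?<longitude>.+?)$"
    | rangemap field=severity orange=0-2 red=3-4
    | table _time latitude longitude faultDesc range

rangemap puts the matching label ("orange" or "red") into a field called range.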
Hello everyone, I have fields like this in my logs: event="some text text2 text3   something     something2". How should I write a regex to match everything up to the first run of 3 or more spaces? For example, for this event it should match "some text text2 text3".
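A minimal sketch with rex, assuming the event field is already extracted:

    ... | rex field=event "^(?<event_trimmed>.+?)\s{3,}"

The lazy .+? stops at the first run of three or more whitespace characters; for this sample it yields event_trimmed="some text text2 text3".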
Hi All, please help me. I am trying to upgrade the Splunk UF to the most recent version, i.e. 9.0.3. I have stopped the splunk service and used the commands below:

Downloaded the tar file as the root user:
wget -O splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz "https://download.splunk.com/products/universalforwarder/releases/9.0.3/linux/splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz"

Unpacked it as the splunk user:
tar xvfz splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz -C /opt

Ran this as the splunk user (tried as the root user as well):
./splunk start -accept-license

But splunk start stops here:
Error calling execve(): No such file or directory
Error launching command: Invalid argument

I have attached a screenshot of what is happening; please help me with a resolution. I really appreciate your help. Regards, PNV
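One thing worth checking first (a guess, since execve errors at startup often point to an architecture or loader mismatch rather than a Splunk configuration problem): confirm the unpacked binary matches the host architecture:

    uname -m
    file /opt/splunkforwarder/bin/splunk

If file reports x86-64 but uname -m shows something else (e.g. aarch64), the wrong package was downloaded for this host.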
I have a UF set to send logs to both a Splunk indexer and a SIEM, using the tcpout settings in outputs.conf, but this sends via TCP and we want it to use UDP (due to the high log rate). Can it be done? There is no option in the tcpout stanza to set a protocol, so it is TCP only. I found there is a syslog output stanza for outputs.conf which can use UDP or TCP, but the docs say it can't be used on UFs. Am I stuck with TCP, or is there another way? Thanks for any responses, Rod.
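For reference, the syslog output stanza on a heavy forwarder (where it is supported) looks roughly like this; a sketch with a placeholder host and port:

    [syslog:siem_out]
    server = siem.example.com:514
    type = udp

So one workaround is to point the UF's tcpout at a heavy forwarder and let the HF forward to the SIEM over UDP.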
I'm consuming data from Splunk REST API endpoints for other purposes. However, the request below throws this error because I used the "lookup" command in the query; if the "lookup" command is not used, the query works properly. Could anyone assist me in resolving this issue?

Error:

<?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="FATAL">Error in 'lookup' command: Could not construct lookup 'master_sheet.csv, host, as, host, OUTPUT, LOB, Region, Application, Environment'. See search.log for more details.</msg> </messages> </response>

Query:

curl -k -u user:pass https://localhost:8089/services/search/jobs --data-urlencode search='search index=foo sourcetype=abc source=*fs.log | rex "(?<Date>.*)\|(?<Mounted>.*)\|(?<Size>.*)\|(?<Used>.*)\|(?<Avail>.*)\|(?<Used_PCT>.*)\|(?<Filesystem>.*)" | eval Used_PCT=replace(Used_PCT,"%","") | search Filesystem IN (/apps, /logs) | stats latest(*) as * by host,Filesystem | where Used_PCT>=80 | sort -Used_PCT | rename Used_PCT as "Use%" | table host,Filesystem,Size,Used,Avail,Use% | lookup master_sheet.csv host as host OUTPUT LOB,Region,Application,Environment | table host,LOB,Region,Application,Environment,Filesystem,Size,Used,Avail,"Use%"' -d id=mysearch_1234567

curl -u user:pass -k https://localhost:8089/services/search/jobs/mysearch_1234567/results --get -d output_mode=csv
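One thing to check (hedged; it depends on where master_sheet.csv lives and how it is shared): searches submitted to /services/search/jobs run in a generic namespace, so a lookup that is only visible inside one app may not be found. Submitting the job in that app's namespace sometimes resolves it, e.g. for the search app:

    curl -k -u user:pass https://localhost:8089/servicesNS/user/search/search/jobs \
      --data-urlencode search='search index=foo ... | lookup master_sheet.csv host OUTPUT LOB,Region,Application,Environment' \
      -d id=mysearch_1234567

Also confirm the lookup's sharing/permissions allow the calling user to read it.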
Hello Splunkers, I have the following raw events:

2023-01-20 18:45:59.000, mod_time="1674240490", job_id="79" , time_submit="2023-01-20 10:04:55", time_eligible="2023-01-20 10:04:56", time_start="2023-01-20 10:45:59", time_end="2023-01-20 10:48:10", state="COMPLETED", exit_code="0", nodes_alloc="2", nodelist="abc[0002,0006]", submit_to_start_time="00:41:04", eligible_to_start_time="00:41:03", start_to_end_time="00:02:11"

2023-01-20 18:45:59.000, mod_time="1674240490", job_id="79" , time_submit="2023-01-20 10:04:55", time_eligible="2023-01-20 10:04:56", time_start="2023-01-20 10:45:59", time_end="2023-01-20 10:48:10", state="COMPLETED", exit_code="0", nodelist="ABC[0002-0004,0006-0008,0073,0081,0085-0086,0089-0090,0094-0095,0097-0098]" submit_to_start_time="00:41:04", eligible_to_start_time="00:41:03", start_to_end_time="00:02:11"

How do I extract or parse the nodelist value, e.g. nodelist="ABC[0002-0004,0006-0008,0073,0081,0085-0086,0089-0090,0094-0095,0097-0098]", into a new field called host? The host values for the first event would be host=abc0002 and host=abc0006; similarly, for the second event it should be host=abc0002, host=abc0003, host=abc0004, host=abc0006, host=abc0007, host=abc0008, host=abc0073, host=abc0081, ..., host=abc0095, host=abc0097, host=abc0098. Thanks in advance.
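A sketch of one way to expand the ranges (it assumes Splunk 8.0+ for mvmap, mvrange, and printf, and lower-cases the prefix to match your expected abc values):

    ... | rex "nodelist=\"(?<prefix>[A-Za-z]+)\[(?<list>[^\]]+)\]"
    | eval part=split(list, ",")
    | mvexpand part
    | eval lo=tonumber(mvindex(split(part, "-"), 0)),
           hi=tonumber(coalesce(mvindex(split(part, "-"), 1), mvindex(split(part, "-"), 0)))
    | eval n=mvrange(lo, hi + 1)
    | eval host=mvmap(n, lower(prefix) . printf("%04d", n))
    | mvexpand host
    | fields - prefix list part lo hi n

mvrange(lo, hi + 1) yields lo..hi inclusive, and printf("%04d", n) restores the zero padding.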
I am in an air-gapped environment and would like to take a screenshot of a dashboard at a regular interval. I would prefer not to install an additional app or Python Selenium. Dashboard Studio has an Action > "Download PNG" that can be performed manually. Is there a way to use this feature on a schedule or from a script? Or is there a Splunk API that can take a screenshot of a specified link? I am currently using Splunk Enterprise 9.0.
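Not a PNG, but for classic (SimpleXML) dashboards there is a PDF render endpoint that can be scripted; a sketch (as far as I know, Dashboard Studio dashboards are not served by this endpoint, so treat that as an assumption to verify):

    curl -k -u user:pass \
      "https://localhost:8089/services/pdfgen/render?input-dashboard=my_dashboard&namespace=search" \
      -o my_dashboard.pdf

input-dashboard is the dashboard ID and namespace is the app; this could run from cron at the desired interval.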
Hello All, I am running Splunk 9.0.2 on Oracle Linux 8.6. We monitor Cisco devices, and these devices require using port 514 to forward their syslogs to Splunk. We are running Splunk as a non-root user. How can we configure Splunk to allow access to port 514? eholz1
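One common workaround (a sketch; it avoids granting the splunk user privileged-port rights) is to have Splunk listen on an unprivileged port such as 1514 and redirect 514 to it at the firewall:

    # redirect inbound UDP syslog from 514 to 1514 (run as root)
    iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-port 1514

Alternatives include setcap CAP_NET_BIND_SERVICE on splunkd, or, more commonly recommended, putting a dedicated syslog receiver (rsyslog/syslog-ng) in front of Splunk.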
Performing the following search (see screenshot), I get this result (see screenshot). I need to parse this information and build an Excel-type table. The information is in JSON format and was uploaded into Splunk, like this (see screenshot).
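Without the field names from the screenshots this can only be a generic sketch, but for JSON events the usual starting point is spath to extract the structure, then table to lay it out:

    ... | spath
    | table *

or name the paths explicitly, e.g. | spath output=status path=result.status. The resulting table can then be exported to CSV and opened in Excel.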
From my research, it looks like base searches increase the performance of dashboards: a dashboard with several views loads faster if the query behind each view post-processes a pre-existing base search. However, my friend is convinced that's not the case and that using base searches does the opposite, prolonging the loading time of the dashboard. Has anyone else had such experience?
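For context, this is the pattern in question; a minimal SimpleXML sketch (index and fields are made up). The base search runs once and each panel post-processes its results instead of re-running the full search:

    <search id="base">
      <query>index=web | stats count by status</query>
    </search>
    <panel>
      <chart>
        <search base="base">
          <query>| where status >= 500</query>
        </search>
      </chart>
    </panel>

Performance typically improves when panels share expensive search work, but a base search that hands a very large result set to its post-process searches can indeed make things slower.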
Given the below scenario:

base search | table service_name, status, count

    Service_name  Status               Count
    serviceA      500_INTERNAL_ERROR   10
    serviceA      404_NOT_FOUND        4
    serviceB      404_NOT_FOUND        1
    serviceC      500_INTERNAL_ERROR   2
    serviceC      404_NOT_FOUND        5
    serviceD      206_PARTIAL_ERROR    1

How can I display the results grouped by service_name, as in the table below?

    Service_name  Status                              Count
    serviceA      500_INTERNAL_ERROR, 404_NOT_FOUND   14
    serviceB      404_NOT_FOUND                       1
    serviceC      500_INTERNAL_ERROR, 404_NOT_FOUND   7
    serviceD      206_PARTIAL_ERROR                   1
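A sketch that should produce the grouped table (assuming count is already numeric):

    base search
    | stats values(status) as status, sum(count) as count by service_name
    | eval status=mvjoin(status, ", ")

values() collects the distinct status strings per service and sum() adds up the counts; mvjoin flattens the multivalue status into a comma-separated string.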
I have a field which contains the HTTP status code, and I want to create a single alert query with multiple conditions. Example: Condition 1) status code is 500 and over 10%: alert should be triggered. Condition 2) status code is 403 and over 20%: alert should be triggered. Condition 3) status code is 503 and over 20%: alert should be triggered. Also, is it possible to have a different time range for each condition? E.g. conditions 1 and 2 should search the last 15 minutes, whereas condition 3 should search the last 30 minutes. How do I form the query?
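A sketch of one way to fold all three conditions (and both windows) into a single scheduled search; the index, field names, and the percentage basis (share of all requests in the window) are assumptions:

    index=web status=* earliest=-30m@m latest=@m
    | eval in15=if(_time >= relative_time(now(), "-15m@m"), 1, 0)
    | stats count as total30, sum(in15) as total15,
            count(eval(status=500 AND in15=1)) as c500,
            count(eval(status=403 AND in15=1)) as c403,
            count(eval(status=503)) as c503
    | eval pct500=round(100 * c500 / total15, 1),
           pct403=round(100 * c403 / total15, 1),
           pct503=round(100 * c503 / total30, 1)
    | where pct500 > 10 OR pct403 > 20 OR pct503 > 20

Set the alert to trigger on number of results > 0; the where clause only emits a row when at least one condition fires.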
Hi, I have an application hosted on a vendor's GCP, and the application logs are stored in BigQuery on GCP. I need to set up Splunk in my infrastructure to monitor the application hosted outside my infra (on the vendor's GCP). Has anyone done something like this? Do you know how I can ingest the logs into Splunk Enterprise?
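As far as I know, the Splunk Add-on for Google Cloud Platform reads from Pub/Sub and Cloud Storage rather than BigQuery directly, so one hedged option is a small export script: pull rows from BigQuery and post them to a HEC endpoint. Everything below (project, table, token, sourcetype) is a placeholder:

    # sketch: BigQuery -> Splunk HEC; assumes the google-cloud-bigquery and
    # requests packages are installed and a service account with BigQuery
    # read access is configured on this host
    from google.cloud import bigquery
    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "<hec-token>"

    client = bigquery.Client(project="vendor-project")
    rows = client.query(
        "SELECT * FROM `vendor-project.app_logs.events` LIMIT 1000"
    ).result()

    for row in rows:
        # dict(row) maps column names to values; non-JSON types such as
        # datetime may need str() conversion before posting
        requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            json={"sourcetype": "gcp:bigquery:app", "event": dict(row)},
        )

In practice you would also track a high-water mark (e.g. a timestamp column) between runs so rows are not re-ingested.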