All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a dashboard that uses joins with subsearches. When I run it, and when most others run it, it shows no errors. But for a few people it runs and completes with the red warning error and a message like: [subsearch] search process did not exit cleanly exit code=255. When I inspect the job it says this: "[subsearch]: No matching fields exist." I'm confused why this warning isn't showing up for everyone. Is there a way to ensure this message doesn't show up?
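One way to avoid this class of warning is to drop the join/subsearch pattern and merge both datasets in a single search. This is only a sketch with hypothetical index, field, and key names, not the poster's actual dashboard search:

```spl
(index=idx_a sourcetype=typeA) OR (index=idx_b sourcetype=typeB)
| stats values(fieldA) as fieldA values(fieldB) as fieldB by joinkey
| where isnotnull(fieldA) AND isnotnull(fieldB)
```

Because there is no subsearch, there is nothing to exit uncleanly, and the result no longer depends on per-user role or index filters silently emptying the inner search, which is a common reason only some users see the warning.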
I've got two searches I'm trying to join into one.

| localop | ldapsearch domain=my_domain search="(&(objectCategory=Computer)(userAccountControl:1.2.840.113556.1.4.803:=xxxx))" | table cn, dNSHostName

And

| makeresults | eval fqdn="www.usatoday.com" | lookup dnslookup clienthost AS fqdn OUTPUT clientip as ip

What I would like is a table that has hostname, FQDN, and IP address. I've tried various subsearch methods to join them, but I must have something off, since I either get an error or nothing. Any thoughts? TIA, Joe
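Since dnslookup can be applied directly to the results of the first search, one option is to skip the join altogether and chain the lookup onto the ldapsearch output. A sketch, assuming dNSHostName holds the FQDN to resolve:

```spl
| localop
| ldapsearch domain=my_domain search="(&(objectCategory=Computer)(userAccountControl:1.2.840.113556.1.4.803:=xxxx))"
| table cn, dNSHostName
| lookup dnslookup clienthost AS dNSHostName OUTPUT clientip AS ip
| table cn, dNSHostName, ip
```

This produces one row per computer object with hostname (cn), FQDN, and resolved IP, with no subsearch involved.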
Which query will return the notable events worked by each owner, along with the incident status, in Enterprise Security?
How do I create a MITRE ATT&CK tactics dashboard for the Splunk Enterprise Security Cloud solution using any app? I used the query below, but it does not return historic data:

| sseanalytics | table name usecase hasSearch includeSSE datasource displayapp app journey category domain icon description dashboard mitre killchain alertvolume bookmark_status | search (usecase="*") (category="*") * | stats count by mitre | search mitre!="None" | sort + count
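One hedged alternative for historic data: sseanalytics describes available content, not past detections, so counting triggered notable events by their ATT&CK annotations may be closer to what is wanted. A sketch using the ES `notable` macro; the exact annotation field name can differ by ES version and content configuration:

```spl
`notable`
| stats count by annotations.mitre_attack
| rename annotations.mitre_attack as mitre_technique
| sort - count
```

Run over a historic time range, this counts notables per annotated ATT&CK entry rather than per available detection.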
We have a long-standing batch input that has stopped working. No matter how I change the input, including pointing it directly at a single file, nothing changes. Is there any way to get more information? Right now I have no information about why the files are no longer being ingested. Can I change a logging config to get more info? I have cleared the fishbucket with no change. We are using the 7.3.3 UF. I do notice more latency when I ls the file; could the shared file system be too slow? I am baffled, so any ideas are more than welcome. Thanks!
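For more detail on why files stop being ingested, the forwarder's own splunkd.log is usually the first place to look. A sketch to run on the indexer, assuming the UF forwards its _internal logs (replace the hypothetical host value with the real UF host):

```spl
index=_internal host=my_uf_host source=*splunkd.log*
    (component=BatchReader OR component=TailReader OR component=TailingProcessor OR component=WatchedFile)
| table _time host component log_level _raw
| sort - _time
```

WARN/ERROR entries from these tailing/batch components often name the exact file and the reason it was skipped, which is more informative than the absence of data alone.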
We are setting up the Phantom app Microsoft Exchange On-Premise EWS (version 3.0.3) to talk to our on-prem Exchange EWS instance, and we get the error below when we do a test connection:

"HTTP Code: 401. Reason: Unauthorized. Details: . Toggling the impersonation configuration on the asset might help, or login user does not have privileges to the mailbox."

I toggled the impersonation settings, but the error is the same. Are there any specific permissions that need to be granted to the Exchange account? So far the Exchange admin team has confirmed the ID has rights to impersonate.
I am trying the following query; however, activityId is not being passed to the second query and I am not getting any results.

index=kubernetes lineOfBusiness=ifm component=chub useCase=C5 responsePayload | rex field=_raw "imsiActivationDate\"\:\"(?<imsiActivationDate>[^\"]*)" | rex field=_raw "simChangeDate\"\:\"(?<simChangeDate>[^\"]*)" | rex field=_raw "activity-id=(?<activityId>[^||]*)" | table activityId | map search="index=kubernetes lineOfBusiness=ifm component=ifm activity-id=*$activityId$*" | rex field=_raw "msisdn":"=(?<msisdn>[^=]*)" | dedup activityId, msisdn | table activityId msisdn
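Rather than map, whose $activityId$ substitution is easy to get wrong, the two event sets can often be correlated in a single pass with stats. A sketch, keeping the spirit of the poster's rex patterns and assuming activity-id is extractable from both components:

```spl
index=kubernetes lineOfBusiness=ifm (component=chub OR component=ifm)
| rex field=_raw "activity-id=(?<activityId>[^|]*)"
| rex field=_raw "msisdn\":\"(?<msisdn>[^\"]*)"
| stats values(msisdn) as msisdn by activityId
| where isnotnull(msisdn)
```

Grouping by activityId merges the chub and ifm events for the same activity without any subsearch or token passing.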
Two questions on the admin side.

Question 1: How many hosts are on each version of the Splunk Universal Forwarder?

index="_internal" source="*metrics.log*" group=tcpin_connections | dedup hostname | stats count(hostname) as TotalCount by hostname, version, os | table hostname, version, os, TotalCount

This query returns results, but can someone confirm whether it is correct?

Question 2: Which Splunk version is each of our Splunk servers on?

I tried a REST query, but it isn't working. I need to list all the Splunk instances (SHC, IC, deployer, deployment server, and so on). They don't want to open the Monitoring Console; they want a custom dashboard for it.
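For question 2, a REST-based sketch that lists the version of every instance visible to the search head; note it only reaches instances configured as search peers of wherever it runs, so standalone components like a deployment server may not appear:

```spl
| rest /services/server/info splunk_server=*
| table splunk_server host version server_roles os_name
```

Saved as a dashboard panel, this avoids opening the Monitoring Console while showing version and role per instance.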
I have a search roughly equivalent to this:

... | eval TimeHour=strftime(_time,"%Y-%m-%d %H:00:00") | eval TimeDay=strftime(_time,"%Y-%m-%d") | eval TimeWeek=strftime(_time,"%Y-%V") | stats dc(transactionId) as "Users" by TimeHour, TimeDay, TimeWeek

I want to create a line chart that allows the user to choose to group by hour, day, or week. What's the best way to achieve that? Maybe a string "date" isn't the right way to go. In any event, can I change which field from a search is the X axis, rather than defaulting to something random? I'm frustrated with the lack of flexibility in visualizations. Thanks!
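In a dashboard this is usually done with an input token feeding timechart's span, instead of pre-formatting string dates. A sketch, where $group_span$ is a hypothetical radio or dropdown token whose choices are 1h, 1d, and 1w:

```spl
... | timechart span=$group_span$ dc(transactionId) as "Users"
```

timechart always puts _time on the X axis, which also answers the axis question: with a real time field the line chart gets a proper time axis rather than an arbitrary string column.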
When a Boomi process tries to call the Splunk HTTP URL to feed the data, it receives the following error:

Code 401: Unauthorized for Splunk HTTP

We are using the proper auth key, which is being used in multiple other deployed processes.
Hello guys, I am trying to create a timechart in my dashboard that shows the percentage of people that enter my website and make a purchase. This calculation is equal to amount_purchase/total_amount, and my code looks like this:

| multisearch [| search index="A" | search IN_PEOPLE="gate_10"] [| search index="CATALOGUE" | search ACC="pur_ok"] | streamstats c(IN_PEOPLE) as IN, c(ACC) as OUT | eval rate=OUT/IN

Now that rate has been calculated, I want a timechart that can show me the value of rate for the last 10 days. I was trying the following code:

| multisearch [| search index="A" | search IN_PEOPLE="gate_10"] [| search index="CATALOGUE" | search ACC="pur_ok"] | streamstats c(IN_PEOPLE) as IN, c(ACC) as OUT | eval rate=OUT/IN | timechart span=1d max(rate) as rate

But it is not showing what I am looking for, because it gives the max value of rate recorded, while what I want is the overall rate of yesterday, and of the day before, and so on. To give you an example, the rate for yesterday (April 19, from 00:00 to 24:00) was 0.78, but my code gives me 1, because I guess at some point 1 was the max value of rate. Thank you so much to anyone that can help me out; I truly, from the bottom of my heart, appreciate your help.
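Instead of taking the max of a streamstats running ratio, the daily rate can be computed from daily counts directly. A sketch using the poster's field values; count(eval(...)) counts only the events matching each condition within each day:

```spl
(index="A" IN_PEOPLE="gate_10") OR (index="CATALOGUE" ACC="pur_ok")
| timechart span=1d count(eval(ACC="pur_ok")) as OUT count(eval(IN_PEOPLE="gate_10")) as IN
| eval rate=round(OUT/IN, 2)
```

Each day's rate is then purchases-for-that-day divided by entries-for-that-day, which matches the "overall rate of yesterday" the poster describes.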
Team, good day! I need to install Cisco ISE in Splunk Phantom. I have the new instance of Splunk Phantom installed, which is great, but now I need to install Cisco ISE. Does anyone have the steps to proceed with the installation? Thanks a lot. G
Hi. I'm very much a novice when it comes to dashboards. I have to create a dashboard that monitors our alerts, and I have created this report to start with. I need to add a field where a ticket number can be entered for each tripped alert, and also a drop-down for each alert for the ticket status (i.e. New, WIP, Closed). Here is my search string for the dashboard panel that shows our alerts:

index=_audit action=alert_fired | eval _time=trigger_time | convert timeformat="%+" ctime(_time) as trigger_time | table trigger_time ss_name severity alert_actions sid | eval severity = case(severity==1,"Informational",severity==2,"Low",severity==3,"Medium",severity==4,"High",severity==5,"Critical") | rename trigger_time as "Alert Time:", ss_name as "Alert Name:", severity as "Alert Urgency:", alert_actions as "Alert Actions:", sid as "SID:"

I'm open to suggestions for a better way to do this. Please keep in mind that we cannot install any Splunk apps, as we are in a multi-tenancy environment and do not own the Enterprise Splunk instance. Any assistance is greatly appreciated!
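Without installing apps, one common pattern is to keep the ticket number and status in a CSV lookup keyed by sid, maintained separately (e.g. via a small form panel that writes with outputlookup), and then enrich the alert table from it. A sketch, where alert_tickets.csv is a hypothetical lookup with columns sid, ticket, and status:

```spl
index=_audit action=alert_fired
| eval _time=trigger_time
| lookup alert_tickets.csv sid OUTPUT ticket status
| table _time ss_name severity ticket status sid
```

Each fired alert row then shows its ticket and status whenever a matching sid exists in the lookup, and blank otherwise.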
Hello, I am trying to use a subsearch to extract fields from my JSON logs. I tried with spath and also with rex commands, and I ended up with the below error:

Error in 'rex' command: Invalid argument: '('

Here is a sample log from one of the events:

{"dimension": {"id": 637545304780000000, "name": "2021-04-20T15:47:58Z"}, "end": "2021-04-20T15:48:29.5067304Z", "host_Ip": "18.216.23.71", "indicators": {"First Contentful Paint": "None", "First Paint": "None", "Jitter [ms]": "None", "Max jitter": "None", "Max packet lost": "None", "Max round trip time": "None", "Min Packet loss": "None", "Min jitter": "None", "Min round trip time": "None", "TTInteractive": "None", "appium_errors_#": "None", "appium_test_time_ms": "None", "bytes received": "None", "bytes sent": "None", "custom": "None", "email response time": "None", "email round trip time": "None", "ipfs availabilty count": "None", "ipfs download": "None", "ipfs ping": "None", "ipfs upload": "None", "ipfs upload file size": "None", "lighthouse2": "None", "loader": "None", "rakesh_testing_indicator": "None"}, "node_id": 11, "node_name": "New York, US - Level3", "start": "2021-04-20T15:33:29.5067304Z", "step": "1", "step_name": "1 Login username", "synthetic_metrics": {"# Connection Failures": "None", "# Connections": "8.0", "# Content Load Errors": "0.0", "# Css": "1.0", "# DNS Failures": "None", "# Flash": "0.0", "# Font": "1.0", "# Hosts": "7.0", "# Html": "2.0", "# Image": "2.0", "# Items (Total)": "17.0", "# JS Errors per Page": "0.0", "# Media": "0.0", "# Other": "0.0", "# Purged Runs": "None", "# Redirect": "2.0", "# Response Failures": "None", "# Runs": "1.0", "# SSL Failures": "None", "# Script": "10.0", "# Test Errors": "None", "# Tests with JS Errors": "None", "# Timeout Failures": "None", "# Xml": "0.0", "# Zones": "1.0", "% Adjusted Availability": "100.0", "% Availability": "100.0", "% Content Availability": "100.0", "% Downtime": "0.0", "% Frustrated": "0.0", "% Not Frustrated": "100.0", "% Ping Packet 
Loss": "None", "% Satisfied": "100.0", "% Self Bottleneck": "None", "% Step Content Availability": "None", "% Third Party Bottleneck": "None", "% Tolerating": "0.0", "Apdex": "1.0", "Client Time (ms)": "64.0", "Connect (ms)": "17.0", "Content Load (ms)": "3952.0", "Css (ms)": "36.0", "Css Bytes": "1086.0", "DNS (ms)": "13.0", "DOM Load (ms)": "1168.0", "Document Complete (ms)": "1322.0", "Downloaded Bytes": "2744.0", "File Size": "476.0", "First Contentful Paint": "1751.0", "First Paint": "1751.0", "Flash (ms)": "None", "Flash Bytes": "None", "Font (ms)": "17.0", "Font Bytes": "16267.0", "Frames Per Second": "23.1000003815", "Html (ms)": "502.0", "Html Bytes": "4396.0", "Image (ms)": "93.0", "Image Bytes": "8590.0", "Load (ms)": "None", "Media (ms)": "None", "Media Bytes": "None", "Other (ms)": "None", "Other Bytes": "None", "Page Speed Score": "None", "Ping Round Trip (ms)": "None", "Redirect (ms)": "359.0", "Render Start (ms)": "1796.0", "Response (ms)": "447.0", "SSL (ms)": "22.0", "Script (ms)": "3221.0", "Script Bytes": "1152629.0", "Self Downloaded Bytes": "None", "Send (ms)": "1.0", "Server Response (ms)": "434.0", "Signal Quality": "None", "Signal Strength (dBm)": "None", "Speed Index": "2021.0", "Test Time (ms)": "4309.0", "Throughput": "6.96446700508", "Time To First Byte (ms)": "447.0", "Time To Interactive": "4076.0", "Time to Title (ms)": "None", "Total Downloaded Bytes": "1182968.0", "Visually Complete (ms)": "1751.0", "Wait (ms)": "394.0", "Webpage Response (ms)": "4309.0", "Webpage Throughput": "274.53423068", "Wire Time (ms)": "1258.0", "Xml (ms)": "None", "Xml Bytes": "None"}, "test_id": 1215995, "test_name": "One Login Google authenticator"}   Can someone help me with how to use sub search with Spath or rex commands? Basically, both the primary and sub queries will be using the Spath or rex command to extract few values from the above JSON. A basic example will be good enough for me to try.  
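For pulling individual values out of JSON like this, spath with an explicit path avoids the rex-escaping problems behind the "Invalid argument: '('" error. A sketch extracting a few simple keys from the sample event; keys containing spaces or # characters are trickier and may need renaming or different quoting:

```spl
... | spath output=test_name path=test_name
    | spath output=node path=node_name
    | spath output=apdex path=synthetic_metrics.Apdex
    | table test_name node apdex
```

The same spath calls can be used identically inside a subsearch, since a subsearch is just another search pipeline whose results feed the outer one.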
Hi, I am working on a requirement where I have to write an alert based on the failure rate percentage of a service. Let's say I have 10 web services and I want to trigger the alert based on the traffic, i.e. successful and failed requests.

I have written a query, but it doesn't seem to be giving me the correct results:

index=myapp_prod sourcetype=myapp_service_log "System Exception" NOT "responseStatus=SUCCESS" NOT "ResponseStatusCode=404" NOT "Business Exception" | stats count as Failures by serviceName | appendcols [ search index=myapp_prod sourcetype=myapp_service_log "responseStatus=SUCCESS" | stats count as Success by serviceName | fillnull] | eval Total = Success + Failures <!-- when failureRatePercentage > 10 -->

What I want is a table like this; with my query it's not coming out right, and I am getting multiple rows with the same serviceName and an empty column:

serviceName  TotalRequest  Success  Failed  FailureRatePercentage
service1     1000          800      200     20
service2     2000          1500     500     25

Can anyone advise how I can achieve this? It's better to set the alert based on the failure percentage rather than an absolute value.
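The appendcols approach breaks because the two stats tables aren't guaranteed to have the same serviceName rows in the same order. A single pass with conditional counts avoids that. A sketch, assuming "responseStatus=SUCCESS" in the raw event is the right success classifier:

```spl
index=myapp_prod sourcetype=myapp_service_log
| eval outcome=if(searchmatch("responseStatus=SUCCESS"), "success", "failure")
| stats count as TotalRequest count(eval(outcome="success")) as Success by serviceName
| eval Failed=TotalRequest-Success
| eval FailureRatePercentage=round(Failed/TotalRequest*100, 2)
| where FailureRatePercentage > 10
```

The final where clause leaves only services over the threshold, so the alert can simply trigger when the result count is greater than zero.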
I'm using the splunk:8.1-debian image from Docker Hub to create a Splunk Enterprise application. Once the container is running, I installed the Splunk App for Jenkins using the 'install app from file' option in the Splunk UI. Upon navigating to some dashboards (e.g. Health), the dashboards are not loaded and the browser console shows a JavaScript error. Splunk automatically detects the browser's default language and returns en-GB for the locale. When I explicitly change the web UI locale from en-GB to en-US, all the dashboards load properly.

(Screenshots: Overview dashboard (en-GB), Health dashboard (en-GB), Overview dashboard (en-US), Health dashboard (en-US))
Hello, I'm currently working on a dashboard that has a 'Maps+' panel. I've provided a table with 'latitude', 'longitude', '_time', and 'path' (path has only one value, representing a moving car), but when I use the 'Maps+' playback button or slider, they don't activate anything: the slider stays put, and the date and time are fixed to a point in time relatively close to the earliest recorded event. Moreover, the 'antPath' option suggests the path direction the app is painting is opposite to the direction of the actual movement of the car. Any ideas? Thanks in advance.
splunk@:~/bin $ systemctl status splunk
● splunkd.service - Splunk Universal Forwarder
Loaded: loaded (/etc/systemd/system/splunkd.service; enabled; vendor preset: disabled)
Active: failed (Result: protocol) since Tue 2021-04-20 11:49:25 UTC; 2h 41min ago
Process: 1869 ExecStart=/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt (code=exited, status=0/SUCCESS)
Main PID: 2050 (code=exited, status=0/SUCCESS)

When I run the command (systemctl status splunk), it comes back like this. I killed the process and restarted the Splunk forwarder, and it's running now, and I have removed all the old PIDs. Can you please tell me how to resolve this issue?
Hi all! I am comparing two things by month in year 2020 and year 2021, and I would like to have the chart start in January. Is there any way that I can do this? Instead, what is happening below is that the chart starts in May, and January appears in the middle, as does the comparison.
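One hedged way to force January-first ordering is to chart over a month label that begins with the month number, so the default ascending sort lands on calendar order. A sketch, assuming the base search yields events with a usable _time:

```spl
... | eval month=strftime(_time, "%m-%b"), year=strftime(_time, "%Y")
    | chart count over month by year
```

Labels come out as 01-Jan, 02-Feb, and so on, with one series per year, so the 2020 and 2021 lines align month by month starting from January.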
I'm sure I'm missing something that's pretty obvious, and I'm hopeful that someone can show me the light. I'm running a search that references a lookup table for the search criteria as follows:

index=foo sourcetype=bar [ | inputlookup "cookies.csv" | rename cookie as query | fields query ] | table _time, query, field1, field2

The "cookies.csv" lookup file looks like this:

cookie      <-- header name
cookie1
cookie2
cookie3
...

As noted in the SPL, I'm running a text-based search using the entries from the lookup file (searching on all cookies present). Once the search is complete, I produce a table with rows reflecting the index time, the matching cookie from the lookup file, and two additional fields for each event returned. My use of the special sub-search field "query" comes from this Splunk community post: https://community.splunk.com/t5/Splunk-Search/Subsearch-fields-quot-query-quot-quot-search-quot-How-...

The SPL executes correctly and returns a table with everything I'm expecting EXCEPT the cookie from the lookup file that was matched in the search; that field ("query", since I renamed it) comes back blank in the table. What do I need to change to see the cookies from the lookup file in the table?

UPDATE: If I replace...

[ | inputlookup "cookies.csv" | rename cookie as query | fields query ]

...with...

[ | inputlookup "cookies.csv" | fields cookie | rename cookie as search | format ]

...I have the same issue. My table shows _time, field1 and field2 for all matching events, but not the cookie entry from the lookup that was used for the match.
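The blank column is expected: a subsearch used this way only rewrites the outer search string to filter raw events; it never attaches a field to each matching event. To show which cookie matched, the cookie has to be re-derived on the events themselves, for example with a lookup against whatever field actually contains the cookie. A sketch, where cookie_field is a hypothetical extracted field holding the cookie value on each event:

```spl
index=foo sourcetype=bar
    [ | inputlookup "cookies.csv" | rename cookie as query | fields query ]
| lookup cookies.csv cookie AS cookie_field OUTPUT cookie AS matched_cookie
| table _time, matched_cookie, field1, field2
```

The subsearch still does the filtering, while the lookup annotates each surviving event with the matching entry from cookies.csv.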